Pretty Good Truth

Human knowledge of reality and truth is mostly relative.

“Real” Reality and “capital T” Truth belong to metaphysics, or perhaps to secret and invisible regions of the natural world, but not to the human mind. We may wish to infer that an absolute, objective level of reality exists, but we must admit that any ultimate reality is largely unknowable to creatures such as ourselves.

The best versions of reality and truth we can capture might be called “Pretty Good Reality” and “Pretty Good Truth.” (I’m thinking of the encryption program called “Pretty Good Privacy”.)

PRETTY GOOD will do for most ordinary purposes.

(BTW, “Pretty Good Reality™”, “Pretty Good Reason”, “Pretty Good Ethics”, “Pretty Good Truth”, and “Pretty Good Free Will™” are copyrighted by Poor Richard under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.)

Anti-relativism

What about those who hyperbolically and hypertensively crusade against relativism (especially “moral relativism”), usually without the slightest understanding of it?

They wrongly believe that relativism means that “anything goes.”

People who fret about “moral relativism” are children. One cannot explain subtle matters to them.

Relativistic Morality

Everything in human life is relative, but morals, well examined, can be the least relative things of all. The passion to be a moral person can come from a place of true conscience, compassion, integrity, and nobility, or it can come from a place of fear and conformity. But if it is the former, the desire or passion to know reality and the passion to act morally are inseparable. In this kind of pragmatism, the fact of relativity is acknowledged, but one struggles through constant practice to achieve one’s highest and best approach to morality and objectivity. The life of the mind, properly understood, is not unlike the athletic life. Excellence comes from practice on the field, not only from books and chalkboard talks.

Non-authoritarian, adult, secular morals are based on some version of the “Golden Rule”, which is about empathy and reciprocity, and on utility (the highest good or greatest well-being for the greatest number), with respect both to people and to the entire interdependent community of life.

Empathy is a feeling or sentiment that evolution has given us (in part via “mirror neurons”) to make the fundamental law of reciprocity more agreeable to us. But reciprocity (or the “law of reciprocal maintenance”, as G. I. Gurdjieff put it) is an essential law of living systems and ecosystems, with or without empathy or consciousness.

Some people find in the law of reciprocity a justification for living things feeding on one another, but I question that. To fully satisfy the law of reciprocity one would have to be willing to be eaten or otherwise exploited in return. If there were a species of creatures that treated us as we treat the cow, pig, dog, etc., would we be happy about that? The question of pain is only one factor to consider when imagining how agreeable such a reciprocal relationship would be to us. Would we only be concerned about the process and not the end? Because nature may be in some respects a “war of all against all” (Hobbes), does that prevent the human imagination from devising less cruel and violent relationships with nature? Of course not, unless one is simply a dull clod.

Perspective

One aspect of human relativity is perspectivism, a philosophical view developed by Friedrich Nietzsche which holds that all ideation takes place from particular perspectives. This means that there are many possible conceptual schemes, or perspectives, from which judgments of truth or value can be made. It implies that no way of seeing the world can be taken as definitively “true”, but it does not necessarily entail that all perspectives are equally valid.

Simulation

A basic idea of epistemological relativity is that the closest anyone can come to objective reality or truth is some approximation. Each brain creates its own simulation(s) of reality based on its sense inputs, its processing of those inputs, its interpretations, and its own intrinsic characteristics. Since brains can communicate with other brains, multiple perspectives can be combined or merged to some extent.
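As a loose illustration only (not a claim about how brains actually work), here is a minimal Python sketch of that merging idea: several observers each produce a noisy estimate of the same underlying quantity, and pooling the estimates, weighted by how noisy each observer is, usually lands closer to the mark than any single perspective. The numbers and noise levels are invented for the example.

    import random

    def merge_estimates(estimates):
        """Pool (value, noise_sd) estimates by inverse-variance weighting."""
        weights = [1.0 / (sd ** 2) for _, sd in estimates]
        return sum(w * value for w, (value, _) in zip(weights, estimates)) / sum(weights)

    rng = random.Random(42)
    true_value = 10.0                # the quantity no observer sees directly
    noise_levels = [0.5, 1.0, 2.0]   # assumed distortion of each observer's "simulation"

    # Each observer/brain forms its own imperfect estimate of the quantity.
    estimates = [(rng.gauss(true_value, sd), sd) for sd in noise_levels]

    print("individual estimates:", [round(v, 2) for v, _ in estimates])
    print("merged estimate:     ", round(merge_estimates(estimates), 2))

None of this makes the merged picture “true”; it just tends to be a less bad approximation, which is the whole point of pretty good truth.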

Wheels within wheels

“…and their appearance and their work was as it were a wheel in the middle of a wheel.” (Ezekiel 1:16)

We are further removed from absolute truth or reality by the possibility that all we know of reality may be part of another level of simulation. Reality may consist of recursive simulations within simulations. This is the “simulation argument” or “matrix hypothesis”.

http://en.wikipedia.org/wiki/Simulated_reality :

“A simplified version of this argument proceeds as such:

  1. It is possible that an advanced civilization could create a computer simulation which contains individuals with artificial intelligence (AI).
  2. Such a civilization would likely run many (billions, for example) of these simulations, just for fun, for research, or for any other of many possible reasons.
  3. A simulated individual inside the simulation wouldn’t necessarily know that it is inside a simulation — it is just going about its daily business in what it considers to be the “real world.”

Then the ultimate question is — if one accepts that the above premises are at least possible — which of the following is more likely?

a. We are the one civilization which develops AI simulations and happens not to be in one itself?

b. We are one of the many (billions) of simulations that have run? (Remember point 3.)

In greater detail, this argument attempts to prove the trichotomy, either that:

  • intelligent races will never reach a level of technology where they can run simulations of reality so detailed they can be mistaken for reality (assuming that this is possible in principle); or
  • races who do reach such a sophisticated level do not tend to run such simulations; or
  • we are almost certainly living in such a simulation.”

http://www.simulation-argument.com/
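To put rough numbers on the “which is more likely” question, here is a back-of-the-envelope calculation in Python. The figure of a billion simulations per base civilization is only an assumption borrowed from premise 2 for illustration; the point is simply that if each unsimulated civilization runs N indistinguishable simulations, simulated observers outnumber unsimulated ones N to 1.

    # Toy arithmetic behind the simulation argument (illustrative only).
    # Assumption: each base (unsimulated) civilization runs N indistinguishable simulations.
    N = 1_000_000_000

    # Of every N + 1 civilizations' worth of observers, N are simulated.
    p_simulated = N / (N + 1)

    print(f"chance a 'typical' observer is simulated: {p_simulated:.9f}")
    # prints 0.999999999 -- hence "almost certainly living in such a simulation,"
    # *if* the first two branches of the trichotomy fail.

That conditional “if” is doing all the work, which is why the argument ends in a trichotomy rather than a single conclusion.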

 

In summary, I’d say truth is probabilistic, by which I mean that we might know an approximation of some fraction of the truth, one that may be fit (or pretty good) for a particular purpose. To paraphrase George Box, all facts are wrong but some are useful.

Poor Richard

Related:

Proof Positive? (PRA 2.0)

D.I.Y. PHILOSOPHY PRESENTS: TRVTH


5 Responses to “Pretty Good Truth”

  1. n8chz Says:

    Is “almost certainly” an expression of “pretty good truth” or “pretty damn good truth”?

    One thing about the simulation argument gets back to the first mover thing some people talk about. There must be an outside somewhere. Directly manipulating the outside would seem to be off limits, unless included as a feature. If not, perhaps communicating with the outside is possible. If the agents in charge are driven by curiosity, they should be all ears. Perhaps they can be persuaded (in the name of curiosity, or something else) to give us some physical access to the world they live in. As for the creation of such simulations (simulacra?), that may be the best way of fishing for methods of determining the answer to question (a).

    Question: If simulation is like photocopying, you’d expect that quality gets fuzzier the more layers you go inward. But if AI is bootstrappable, i.e. if we, say, are capable of initiating minds more intelligent than our own, perhaps the most ingenious methods of sabotage should come from the innermost layers of the metaphorical onion. But if the hardware supporting inner simulations is a subset of its container’s, communication as described above must come into play at some point so as to access additional resources. Then there is the question of whether cooperation or rivalry is the most appropriate approach to dealing with the next layer outward. Hopefully the game plan includes penetrating the maze in both inward and outward directions. Oh and sideways, also. What the Unix Masters call ipcs.

    • Poor Richard Says:

      Dear Neightchzie,

      I’m glad you mentioned the derivative, “pretty damned good truth”, which just shows how versatile pretty (blankety-blank) truth is for representing many scales of relativity.

      I created some simulations in my imagination in which I provided the simulated entities a means of communicating with their creator. However, after a few minutes I turned that feature off because the simulated creatures were stupid and annoying.

      Granted, if simulated entities were able to create other simulated entities (simulated AI) which are capable of becoming exponentially more intelligent, those simulated AI entities might eventually have something interesting to say to higher-order agents. Thus the meta-Turing test — if the AI can figure out how to get the attention of the higher-order agents, then maybe they have something worth their attention. None of my imaginary simulations has passed this test within my attention span, that is, before I moved on and forgot all about them. Perhaps I did not supply enough resources. So, what? Does that make me a bad creator?

      It is believed by some that the Unix Masters are objects which have polymorphic abstraction and their inter-process communication methods may suggest certain routes for cross-simulation leakage.

      Good Questions!

      PR

  2. Proof Positive? | Poor Richard's Almanack 2.0 Says:

    […] Pretty Good Truth (PRA 2.0) […]

  3. Sean Says:

    I’m a little concerned with using empathy as a basis for morality. Reciprocity can be relative to interactions (contractualism, which tends to lead towards libertarianism or anarchism), stake in power (social democracy), or money (communism). Mostly, we accept a blend of the three. Empathy, unfortunately, is often not about reciprocity at all, but about the mother bear answering the baby cub’s cry. It is the promotion of the vulnerable, and tending to their care at any cost.

    But I view all these as heuristics in managing utility.

    For your consideration:

    • Poor Richard Says:

      Good points, Sean. Empathy is “fast thinking” and much more subject to errors and special circumstances that limit generalization. The rational compassion approach is slow thinking, hopefully best when time allows.

