Lunar shading anomaly?

Over the past two weeks I have observed the sun and the moon in the sky together on three or four different afternoons.  It isn’t uncommon to see the moon during the day, but on these recent occasions I happened to notice something I never noticed before. Something was “wrong” with the picture. Can you see the anomaly in my drawing? Do you have a plausible explanation?

What’s wrong with this picture?

The typical human brain has a visual processing engine that automatically analyzes the relations between illuminated surfaces and shadows and intuitively computes the angle of illumination (the direction(s) light would have to come from to produce these effects). Artists usually make light and shadowed areas congruent with logical or plausible sources of illumination, even if those sources are not included within the picture.

So the scene above grabbed my attention very quickly and prompted me to look at the afternoon sky again for several days in a row.  As the moon grew more full, the discrepancy between the lunar shading and the location of the sun became a little less obvious, but it was still bothersome.

Once the sun sets, leaving only the moon in the sky, the problem is also less obvious. But when they are both up there in the big, blue sky at the very same time, directly opposite each other in the same contiguous volume of clear, empty space, the cognitive dissonance created by the mismatch between the earth-sun-moon angle and the orientation of the shading on the moon is almost maddening.

It’s just wrong.

What gives?

Poor Richard

[note: unfortunately I didn’t have a digital camera with which to document this anomaly, but my drawing is schematically very close to what I actually observed on several days this September. Ironically, one of the afternoons in question was on September 22, the date of this year’s International Observe the Moon Night! If you have any photos of this anomaly (showing sun and moon together along with lunar shading that seems inconsistent with the earth-sun-moon angle) or know of any such photos online, please leave a comment with pics or links.]

Related:

Images:

“Spot Lights” (click image for source)

Lookback Homeward, Angel

Time Line of the Universe, Credit: NASA/WMAP Science Team

The farther away in the cosmos we look, the farther into a faster expanding *past* of the universe we see, not a faster expanding future.

We wrongly believe the expansion of our universe is accelerating because the red shift of distant stars increases with distance. However, lookback time, the time it takes starlight to reach us, should lead us to interpret red shift evidence in the opposite direction, i.e. the older the light we receive, the greater the shift and the higher the speed of recession. If older = redder = faster, then the recession is slowing down.

Since the canonization of Hubble’s Law, the vast majority of scientists have accepted that the Visible Universe  is expanding. But is the expansion accelerating or slowing down? Observations and calculations based on the standard interpretation of Hubble’s Law  increasingly indicate an accelerating rate of expansion, necessitating the invention of a mysterious “dark energy” to explain such counter-intuitive, gravity-defying observations. However, I will offer an argument for decelerating expansion that is consistent with modern observations but which depends on an unconventional interpretation of Hubble’s Law. Wikipedia introduces Hubble’s Law as follows:

“Hubble’s law is the name for the astronomical observation in physical cosmology that: (1) all objects observed in deep space (intergalactic space) are found to have a doppler shift observable relative velocity to Earth, and to each other; and (2) that this doppler-shift-measured velocity, of various galaxies receding from the Earth, is proportional to their distance from the Earth and all other interstellar bodies. In effect, the space-time volume of the observable universe is expanding and Hubble’s law is the direct physical observation of this process.[1] It is considered the first observational basis for the expanding space paradigm and today serves as one of the pieces of evidence most often cited in support of the Big Bang model.”


Doppler shift due to stellar wobble caused by an exoplanet (Photo credit: Wikipedia)

All this is inferred from observations that waves striking an observer appear higher in frequency if the observer and source are getting closer, and lower in frequency if the observer and source are getting farther away from each other. The amount of this frequency shift (known as a Doppler shift or Doppler effect) in light waves is proportional to the speed at which the source and observer are moving relative to each other. If the observed frequency of light is increased, as when the source and observer are getting closer, this is called a blue shift. If the frequency is decreased, as when a light source and observer are getting farther away from each other, this is called a red shift. Hubble’s Law is based on two different categories of measurements made by astronomers that appear to always have a constant proportion to each other, called Hubble’s Constant (H0). We typically use Hubble’s Constant to determine the velocity (v) of objects like stars and galaxies that we see in deep space. The H0 ratio involves:

  1. the distance (D) of extra-galactic light sources. This is inferred by various methods of contrasting their “true” brightness or luminosity with their “apparent” brightness (the greater the difference in these values, the farther away an object is thought to be)
  2. the direction of motion and velocity (v) of the light sources relative to us. This is inferred from the frequencies of the light we receive (sources with higher red shifts are thought to be moving away from us at higher speeds).

Hubble’s Law is sometimes expressed by the equation v = H0D. Although Hubble’s Law is often cited (perhaps for the sake of simplicity) as evidence that “all” galaxies have been mutually moving away from each other for about 14 billion years, we know that exceptions exist. In some cases galaxies still remain close enough together to gravitationally affect each other’s shapes and motions. Some appear to be headed on collision courses, or to currently be in the process of colliding with each other. Our galaxy is expected to collide and merge with the Andromeda galaxy in about 4.5 billion years. However, it is still thought that the great majority of galaxies are moving away from the majority of other galaxies with the exception of those which are close enough (in galactic groups or clusters) for their mutual gravitational attractions to have overcome the forces which are spreading most galaxies out and away from each other. The primary cause of the apparent general movement of galaxies away from one another is usually considered to be some sort of “Big Bang.”
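
For concreteness, here is a minimal numerical sketch (in Python, not part of the original argument) of the standard bookkeeping: a small redshift z is converted to a recession velocity with the low-redshift approximation v ≈ cz, and Hubble's Law v = H0D then gives a distance. The value of H0 is assumed for illustration only; as discussed later, published estimates have varied widely.

```python
# Minimal sketch of the standard interpretation of Hubble's Law.
# The Hubble constant below is an assumed illustrative value; the essay
# notes that published estimates have ranged from ~50 to ~100 (km/s)/Mpc.

C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, (km/s)/Mpc

def velocity_from_redshift(z: float) -> float:
    """Low-redshift (z << 1) approximation: v ~ c * z, in km/s."""
    return C_KM_S * z

def distance_from_velocity(v_km_s: float) -> float:
    """Hubble's Law, v = H0 * D, solved for the distance D in Mpc."""
    return v_km_s / H0

for z in (0.01, 0.05, 0.1):
    v = velocity_from_redshift(z)
    d = distance_from_velocity(v)
    print(f"z = {z:4.2f}:  v ~ {v:8.0f} km/s,  D ~ {d:6.0f} Mpc")
```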

According to the Big Bang model, the Universe expanded from an extremely dense and hot state and continues to expand today. A common analogy explains that space itself is expanding, carrying galaxies with it, like spots on an inflating balloon. The graphic scheme above is an artist’s concept illustrating the expansion of a portion of a flat universe. (image and description from Wikipedia: Big Bang)

However,  all this is not without plenty of uncertainty and controversy. In the article on Doppler Shift, Wikipedia says:

“For waves that propagate in a medium, such as sound waves, the velocity of the observer and of the source are relative to the medium in which the waves are transmitted. The total Doppler effect may therefore result from motion of the source, motion of the observer, or motion of the medium. Each of these effects are analyzed separately. For waves which do not require a medium, such as light or gravity in general relativity, only the relative difference in velocity between the observer and the source needs to be considered.”


Light from a distant quasar passes through numerous intervening gas clouds in galaxies and in intergalactic space. These clouds subtract specific colors from the beam. The resulting “absorption spectrum,” recorded by Hubble’s Space Telescope Imaging Spectrograph (STIS), reveals the distances and chemical composition of the invisible clouds. (Photo credit: Wikipedia)

But this ignores the fact that the speed and frequency (color) of light may be affected by various materials (such as water or intergalactic gas) or fields of force (such as magnetic, gravitational, or perhaps even Higgs fields) through which it may pass. We generally assume intergalactic space to be a near-perfect vacuum in most places–in other words, we assume that any known or unknown medium or media, if present, will not have altered the generally isotropic, universe-wide observations of light reaching us from distant galaxies in a way that would significantly alter the cosmological principle or the calculations underpinning Hubble’s Law. Is that a reasonable assumption?

In billions of years of travel, how much absorption & re-radiation, filtering, gravitational lensing, etc. is the average galactic image subjected to? The great variety of effects upon light produced with simple glass lenses and even pinholes in paper should make us very humble about interpreting the radiation we get from deep space, which has passed through “only-god-knows-what” in its very long journey.

In fact, many of the assumptions involved in measuring the brightness, distance, velocity, and red shift of distant objects and the properties of galaxies and intergalactic space are controversial, resulting in values for Hubble’s Constant, for example, that vary from 50 to 100 (km/s)/Mpc. As a result of such discrepancies, some scientists think the universe will expand forever and some think it will ultimately stop expanding and start collapsing under the force of mutual gravitational interaction. Under most current interpretations, the latest data (especially that which concerns “standard candles” of brightness such as type 1a supernovae) increasingly seem to favor unending expansion.

But is there an alternate interpretation that might fit the latest measurements and even hold up irrespective of any intervening deep-space materials or forces either known or as yet unknown? I will suggest one presently.

“The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterized by values of density parameters (ΩM for matter density and ΩΛ for dark energy). A “closed universe” with ΩM > 1 and ΩΛ = 0 comes to an end in a Big Crunch and is considerably younger than its Hubble age. An “open universe” with ΩM ≤ 1 and ΩΛ = 0 expands forever and has an age that is closer to its Hubble age.” (image and description from Wikipedia, Hubble’s Law)

This, so far, is a very brief and simplified summary of Hubble’s Law and some of its implications, leaving out many details and issues. I am not qualified to weigh in on the controversies surrounding any of the variables and measurements briefly outlined above. But I do have some bones to pick with every discussion of Hubble’s law and (by extension) every discussion of big bang and expansion cosmology I have read in the past 45 years. The bone that always catches in my craw is the damned accelerating expansion thing.

And I have some other questions related to time and space which, if they remain unanswered, render any of the cosmological models I know of so ambiguous as to be meaningless. If the expansion of the universe is a relativistic expansion of a space-time continuum, what evidence do we have that such a relativistic expansion would produce any measurable change in the distances between galaxies? All our metrics of distances between cosmic objects rely on the distance traveled by radiation in a given unit of time. If space and time are both expanding, what are we actually measuring if we measure one in terms of the other? And what if the speed of light has changed over the course of deep, cosmic time? What if we could measure the distance between earth and a receding galaxy with a physical yardstick? If galaxies are receding because of the expansion of space-time itself, wouldn’t the space between the atoms and molecules in the yardstick expand as well, at least once it reached out beyond the space-time-compacting gravity of Earth? (And BTW, how might we distinguish, mathematically, the gravity of Earth from the effects of acceleration that might be created by an expanding Earth?) The usual answer to such conundrums is that local forces dominate between nearby atoms and molecules, overriding the universal expansion which occurs at large scales. Still, if the space between atoms and molecules were expanding, in lock-step with other universal changes, how would we know?

But such matters are not the real topic of this essay. I mention them only to suggest the degree to which our current cosmological narratives are underpinned by assumptions which are, as yet, neither provable nor falsifiable. For the most part, however, the general correlation between distance and red-shift and the correlation between red-shift and velocity seem relatively uncontroversial:

Relatively Uncontroversial Fact #1: In general, the greater the apparent red-shift of an object, the faster it is moving away from us (and we from it).

Relatively Uncontroversial Fact #2: In general, the greater the apparent distance of a light-producing object from Earth, the greater its apparent red-shift.

Understandably, most everyone draws the conclusion from these two facts that the expansion of the universe has been accelerating and still is. As far as I can tell, the “dark energy” and “open universe” hypotheses have no other foundations aside from the standard interpretations of Hubble’s law, i.e. that red-shift appears to increase with distance and distance with time. If red-shift increases with distance and distance increases with time, then by simple, deductive reasoning red-shift increases with time; or in other words, the expansion of the universe is accelerating. The logic:

  1. Socrates is a man.
  2. All men are mortal.
  3. Therefore, Socrates is mortal.

But what if, unknown to us, some men were potentially immortal? Or what if Socrates was an extraterrestrial space alien rather than a “man”? (Cue Twilight Zone theme song …)

Since the amount of mass in the universe, even including dark matter, currently appears too small to counteract the rate of acceleration computed according to our standard interpretation of Hubble’s Law, the universe appears to be “open” (i.e. it will continue to expand “forever”). But so far our only explanation for such a startling state of affairs is based on cosmic inflation and dark energy. These are “dark hypotheses,” little better than “insert explanation later” place-holders.

On the other hand, what if we drew the opposite conclusion from Fact 1 and Fact 2 above, concluding that the expansion of the universe is decelerating, based on Relatively Uncontroversial Fact 3: the greater the distance of an object from Earth, the longer it takes its light to reach us. This is the lookback time (tL).

When we receive light from an object that is one billion light-years away we see, by definition, a signal emitted one billion years ago. If, from its apparent red-shift, we drew a conclusion about the relative velocity at which the Earth and the source are moving away from each other in the present moment, we would be getting information across a one-billion-light-year distance instantaneously, wouldn’t we? Is that possible, or does that contradict basic physics and information theory in the sense of information about the relative velocity of earth and distant objects being transmitted to us faster than the speed of light?

If instead we interpreted that same red-shift data as information that is one billion years old, it tells us the speed and direction of the light source and observer one billion years ago rather than at the moment that the data is collected and measured by us. This seems more consistent, IMHO, with known physics and information theory, and it would logically reverse the usual correlation of expansion speed with time, providing evidence for a decelerating expansion. Instead of v = H0D, an alternative equation for Hubble’s Law for cases when lookback time (tL) is significant might be:

v = H0/D

or

v = H0/tL (since in this expression distance (D) is equivalent to lookback time (tL))
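
To put rough numbers on the lookback-time framing, here is a small sketch that, for a few distances, prints the standard recession velocity v = H0D alongside the naive light-travel (lookback) time, i.e. how old the redshift information is when it reaches us. The H0 value and the static distance-to-time conversion are simplifying assumptions; a proper lookback time in an expanding universe requires a full cosmological model.

```python
# Sketch of the lookback-time framing discussed above. For each distance it
# prints the standard Hubble velocity v = H0 * D and the naive light-travel
# time, i.e. the "age" of the redshift information when it arrives.
# Assumptions: H0 = 70 (km/s)/Mpc and a static distance-to-time conversion
# (1 Mpc of distance ~ 3.26 million years of light travel).

H0 = 70.0             # assumed Hubble constant, (km/s)/Mpc
MYR_PER_MPC = 3.2616  # light-travel time per Mpc, in millions of years

def hubble_velocity(d_mpc: float) -> float:
    """Standard reading: recession velocity in km/s at distance d_mpc."""
    return H0 * d_mpc

def lookback_time_myr(d_mpc: float) -> float:
    """Naive light-travel (lookback) time in millions of years."""
    return d_mpc * MYR_PER_MPC

for d in (100.0, 1000.0, 3000.0):
    print(f"D = {d:6.0f} Mpc:  v = {hubble_velocity(d):8.0f} km/s, "
          f"information is ~{lookback_time_myr(d):7.0f} million years old")
```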

Does the apparent red shift of deep space objects represent the relative velocity between the light source and Earth at the time of observation, as is generally assumed; or the relative velocity at the time of emission, as I propose; or some third possible (but mysterious) value somewhere between those two?

Apparently, the scientific community sees no problem so far with the standard expressions and interpretation of Hubble’s Law and I have not encountered any discussion of an information theory paradox or any problem with the standard interpretations of the Doppler effect for cases involving significant lookback times. Can this be explained by the fact that we have no exacting standards of reference or controlled experiments involving significant amounts of lookback time? After all, it is very difficult to combine high relative velocities and significant amounts of lookback time in tightly-controlled experiments on earth and get results that exceed the margin of error.

One issue raised by my hypothesis concerns the basic definition of relative motion in cases that involve significant amounts of lookback time. We usually consider a relative velocity or relative direction to specify relations between two or more objects at a given instant in time. When lookback time is significant, then by definition we have incomplete data about any possible changes in relative motion between the time an electromagnetic signal is emitted and the time it is received.

When we measure relativistic changes in mass, length, or time-keeping in things that are rapidly orbiting the Earth (like atomic clocks on GPS satellites or changes to particles in a big accelerator) we are dealing with direct, well-controlled comparisons–for example, the speed of identical clocks on earth and in orbit, or particles with precisely measurable energies and velocities. In such cases we have clear controls and standards of reference. Even then, experimental results can be ambiguous and even baffling, such as the results in state-of-the-art versions of the double-slit experiment which suggest that the nature of quantum reality (i.e. particle vs wave interference effects) can be changed retroactively by successive observations.

But when we compare the velocities of very distant, very ancient stars and galaxies, and the “proper” vs apparent Doppler-shifted frequencies of their radiation, do we really have clear controls and standards of reference? Scientists go to amazing, heroic, and often very convoluted lengths trying to estimate extra-galactic distances, luminosities, velocities, frequencies, and frequency shifts. Yet it appears to me that the variables are often difficult, if not impossible, to define in non-circular terms; and the best estimates of true, absolute, or “proper” values are still only educated guesses in many cases.

However, it seems to me that we might test the standard and alternative (lookback-significant) equations for Hubble’s Law with precision-engineered lasers on high-orbit satellites that could change their relative velocities in very controlled amounts between the time a signal of known “proper” frequency (frequency minus any Doppler effect) was emitted and the time that a shifted signal was received.

I am not aware that any such experiments aimed at confirming the relationship between lookback time and the Doppler effect have ever been performed or even proposed, despite the fact that countless experiments with rapidly-moving satellite-based electromagnetic and optical instruments at one or both ends have been performed for many other purposes. IMHO such experiments might also reveal (or perhaps even calibrate) other aspects of the relationships between space, time, motion, and gravity that still remain ambiguous long after Einstein. So far, the only controlled space-time calibrations I’ve heard of concern satellite clocks or particle accelerators. We know that things like Doppler effects and gravitational lensing effects occur in deep space, but knowing how to measure and interpret them correctly for deep-space objects is still more a “dark” art than a science. But I get all my astrophysics from the popular media, so WTF do I know?

Poor Richard

Related:

Stalking the big-brained baby

I have a hypothesis that the era of C-section baby deliveries could lead to the advent of a new big-brain phenotype.

Big-brained Baby?

Consider: the brain has been struggling to get bigger for about the last 100,000 years (since we reached anatomical modernity) but it can’t because we long since reached the limits of brain & skull-case size that could fit through the human birth canal!

Of course, the birth canal will gradually get bigger–at maybe .1 cm per zillion years.

Meanwhile, enter the c-section. Now a fetus with an abnormally large brain case doesn’t have to get through the birth canal in one piece and unmangled anymore. Because of its delivery risks, discovered in prenatal monitoring,  it will routinely get a c-section and survive. More mothers who gestate big-brained babies will survive, too, possibly to have more big-brained babies (bbb’s)!

OK, so where are these bbb’s? Who knows! Nobody is looking for them!

We need to start systematically looking for these big-brained c-section babes in all the hospital delivery rooms. When we find them we need to get them into a controlled, double-blind research program designed as follows:

An equal number of big-brained and average-brained babies will all be given the same intervention and surveillance. The identities of which babies are bbb’s and which are average will be sealed in a vault for the duration of the study. All babies are treated exactly the same, but the treatment is very, very good.

No, we don’t take the babies away from their moms and hold them in an underground lab, but they go into a very enriched development program using all the latest neurodevelopmental enhancement methods. In other words, damn good schooling. That way they can all get the most consistent treatment throughout the program, which might last from 3 to 12 years, depending on funding. The parents would also be counseled and educated to promote the best possible development of the kids, and they would be paid lavishly so at least one parent won’t need to work and can be a dedicated and highly trained care-giver.

By rounding up these bbb’s and their moms and dads (figuratively) and giving them the best possible care, we accomplish some sleight-of-hand eugenics: we help accelerate the emergence and proliferation of the bbb phenotype in the most benign way possible, but without that being an explicit objective.

Huh? Why not?

Poor Richard

Baby Big Head (click for source)

Biological analogs in the workplace

Statue of Marx and Engels from the Szoborpark,...

Image via Wikipedia

Response to The swarm as a method of work organisation (P2P Foundation blog)

Excerpted from Bob Cannell:
“A 2006 European study found the primary cause of degeneration of worker coops was capture by experts who come to dominate and control information. Creating controllers is not safe in worker owned or cooperative business.”

This is an interesting observation and I think there may be an important issue to explore.

Humans share many genes with other social animals. One thing we can observe in many social species is the way that “status” genes can be turned on by social circumstances. In many species when an “alpha” individual is lost by the pack or herd, a formerly subordinate individual will fill that role. Not only does the behavior of such an individual change, but in many cases there are physiological and morphological changes that can accompany such status changes even in fully developed adult individuals. This may be mediated by epigenetic mechanisms.

It may be that humans (perhaps some more than others) are similar in that respect. Put some people into a group of cooperating peers where there is no alpha individual and this may actually trigger something in them to assume an alpha role.

In humans it is especially difficult to distinguish between psychological, genetic, and environmental triggers for behavior, and my point is not to make a case for genetic determinism. I am only suggesting that the variety of unconscious and involuntary forces that might affect human competitiveness and status-related behavior can run very, very deep.

If leaders, controllers, experts, etc. are dangerous for cooperative peer groups, it may take a lot more than peer pressure or ideology to suppress the tendency of humans to express such phenotypes.

It occurs to me that we might try to incorporate environmental stimuli in the workplace that would somehow inhibit any tendency for alpha traits to emerge and drive individuals to fill status roles that are vacant by intent–if there were some kind of artificial “decoy” alpha in the room, for example. Perhaps a magnificent animated statue of Marx that would occasionally…

Poor Richard

SPECIAL FUKUSHIMA NUCLEAR UPDATE

If you love this planet,

PLEASE LISTEN TO THIS INTERVIEW:

The interview goes back to the beginning of the disaster but Keep Listening. You’ll hear recent info that is startling and credible. Arnold Gundersen, a former nuclear engineer, explains that Reactor 2’s fuel has probably partially eaten its way out of the reactor and molten fuel is likely eating its way through the concrete floor of the reactor building.

Dr. Helen Caldicott and  Arnold Gundersen

SPECIAL FUKUSHIMA UPDATE INTERVIEW

Related videos:

Fukushima Updates, (Arnold Gundersen) Includes some excellent video tours of the Fukushima site.

The Tokyo Electric Power Company (TEPCO) has reported that seawater containing radioactive iodine-131 at 5 million times the legal limit has been detected near the plant. According to the Japanese news service, NHK, a recent sample also contained 1.1 million times the legal level of radioactive cesium-137. (e360.yale.edu)

Nuclear Power Plant Updates (guardian.co.uk)

Fukushima Nuclear Power Plant Data  for Thursday, April 7, 2011

Download this data

Reactor units 1-6:

Electric / thermal power output (MW): Unit 1: 460 / 1380; Units 2-5: 784 / 2381; Unit 6: 1100 / 3293
Type of reactor: Unit 1: BWR-3; Units 2-5: BWR-4; Unit 6: BWR-5
Operation status when the earthquake occurred: Units 1-3: in service, then shutdown; Units 4-6: outage
Core and fuel integrity: Unit 1: damaged 70% (400); Unit 2: damaged 30% (548); Unit 3: damaged 25% (548); Unit 4: no fuel rods; Unit 5: not damaged (548); Unit 6: not damaged (764)
Reactor pressure vessel integrity: Units 1-3: unknown; Units 4-6: not damaged
Containment vessel integrity: Unit 1: not damaged (estimation); Unit 2: damage and leakage suspected; Unit 3: not damaged (estimation); Units 4-6: not damaged
Core cooling requiring AC power (large-volume injection of plain water): Units 1-3: not functional; Unit 4: not necessary; Units 5-6: functional
Core cooling not requiring AC power (cooling through heat exchangers): Units 1-3: not functional; Unit 4: not necessary; Units 5-6: functioning (in cold shutdown)
Building integrity: Unit 1: severely damaged (hydrogen explosion); Unit 2: slightly damaged; Unit 3: severely damaged (hydrogen explosion); Unit 4: severely damaged (hydrogen explosion); Units 5-6: vent hole opened in the rooftop to avoid a hydrogen explosion
Water level in the reactor pressure vessel: Units 1-3: fuel exposed; Units 4-6: safe
Reactor pressure vessel pressure: Unit 1: gradually increasing, decreased a little after increasing over 400 C on the 24th; Unit 2: unknown / stable; Unit 3: unknown; Units 4-6: safe
Containment vessel pressure: Unit 1: decreased a little after increasing up to 0.4 MPa on the 24th; Units 2-3: stable; Units 4-6: safe
Water injection to core (accident management): Units 1-3: continuing (switched from seawater to freshwater); Units 4-6: not necessary
Water injection to containment vessel (AM): Unit 1: TBC; Unit 2: TBD (seawater); Unit 3: TBC; Units 4-6: not necessary
Containment venting (AM): Units 1-3: temporarily stopped; Units 4-6: not necessary
Fuel integrity in the spent fuel pool (stored spent fuel assemblies): Unit 1: unknown (292); Unit 2: unknown (587); Unit 3: damage suspected (514); Unit 4: possibly damaged (1331); Unit 5: not damaged (946); Unit 6: not damaged (876)
Cooling of the spent fuel pool: Unit 1: water spray started (freshwater); Unit 2: continued water injection (switched from seawater to freshwater); Unit 3: continued water spray and injection (switched from seawater to freshwater); Unit 4: continued water spray and injection (switched from seawater to freshwater), hydrogen from the pool exploded on Mar. 15th; Units 5-6: pool cooling capability recovered
Main control room habitability and operability: Units 1-2: poor due to loss of AC power (lighting working in the control room for Units 1 and 2); Units 3-4: poor due to loss of AC power (lighting working in the control room for Units 3 and 4); Units 5-6: not damaged (estimate)
INES level (estimated by NISA): Units 1-3: 5; Unit 4: 3

Radiation level status at the Fukushima Dai-ichi NPS site:
Radiation level: 0.70 mSv/h at the south side of the office building; 46 μSv/h at the West gate as of 09:00, Apr. 7th; 108 μSv/h at the Main gate as of 10:00, Apr. 6th.
A radiation dose higher than 1000 mSv was measured at the surface of water accumulated in the basement of the Unit 2 turbine building and in the piping tunnel outside the building on Mar. 27th.
Plutonium was detected in soil sampled at the Fukushima Dai-ichi NPS site on Mar. 21st, 22nd, 25th and 28th. The amount is so small that the plutonium is not harmful to the human body.
Radioactive materials exceeding the regulatory limit have been detected in seawater samples collected in the sea surrounding the Fukushima Dai-ichi NPS since Mar. 21st. On Apr. 5th, radioactive iodine (I-131) at 7.5 million times the legal limit was detected in seawater that had been sampled near the water intake of Unit 2 on Apr. 2nd. It was found on Apr. 2nd that highly radioactive (more than 1000 mSv/h) water in the concrete pit housing electrical cables was leaking into the sea through cracks in the concrete wall. It was confirmed on Apr. 6th that the leakage stopped after a hardening agent was injected into holes drilled around the pit. Release of some 10,000 tons of low-level radioactive wastewater into the sea began on Apr. 4th, in order to make room for the highly radioactive water mentioned above. Regarding the influence of the low-level release, TEPCO estimated that eating fish and seaweed caught near the plant every day for a year would add some 25% of the dose that the general public receives from the environment in a year. Monitoring of the surrounding sea area has been enhanced since Apr. 4th.
Radioactive materials were detected in groundwater sampled near the turbine buildings on Mar. 30th.
● Influence on people’s lives
Radioactive material was detected in milk and agricultural products from Fukushima and neighboring prefectures. The government issued orders to limit shipment (from the 21st) and intake (from the 23rd) of some products.
Radioactive iodine exceeding the provisional legal limit was detected in tap water sampled in some prefectures from Mar. 21st to 27th.
Small fish caught in waters off the coast of Ibaraki on Apr. 4th were found on Apr. 5th to contain radioactive cesium above the legal limit. It was decided on Apr. 5th that, as a legal limit for radioactive iodine, the same limit used for vegetables should be applied to fishery products for the time being.

Evacuation orders: <1> Evacuate within 3 km of the NPS and stay indoors within 10 km of the NPS (issued at 21:23, Mar. 11th). <2> Evacuate within 10 km of the NPS (issued at 05:44, Mar. 12th). <3> Evacuate within 20 km of the NPS (issued at 18:25, Mar. 12th). <4> For the zone from 20 km to 30 km from the NPS: stay indoors (issued at 11:00, Mar. 15th) and consider leaving (issued at 11:30, Mar. 25th).

Remarks
● Progress of the work to recover injection function: Water injection to the reactor pressure vessels by temporarily installed pumps was switched from seawater to freshwater at Units 1, 2 and 3. High radiation is hampering the work to restore the originally installed injection pumps. Discharging the radioactive water in the basements of the Unit 1 through 3 buildings continues, to improve this situation. Water transfer work is being carried out to secure a place for the water to go. Lighting in the turbine buildings became partly available at Units 1 through 4.
● Function of containing radioactive material: It is presumed that radioactive material inside the reactor vessels may have leaked outside at Units 1, 2 and 3, based on radioactive material found outside. NISA announced that the reactor pressure vessels of Units 2 and 3 may have lost air-tightness because of low pressure inside the pressure vessels, but said on the same occasion that it is unlikely that there are cracks or holes in the reactor pressure vessels. TEPCO started to inject nitrogen gas into the Unit 1 containment vessel on Apr. 6th to reduce the possibility of a hydrogen explosion. The same measure will be taken for Units 2 and 3.
● Cooling the spent fuel pools: Steam-like substances rising intermittently from the reactor buildings at Units 1, 2, 3 and 4 have been observed. Injecting and/or spraying water into the spent fuel pools has been conducted.
● Prevention of the proliferation of contaminated dust: Testing of spraying synthetic resin to contain contaminated dust began on Apr. 1st.

Wilhelm Reich

Cover of "The Murder of Christ (The Emoti...

Cover via Amazon

Remarks about the video

Man’s Right To Know Part 1 of 3

I’ve read most of Wilhelm Reich‘s work that’s in print and consider him a world-class philosopher, psychoanalyst, and experimental psychologist. He was at one time one of Freud’s star pupils and he did some of the first lab research on sex and libido.

Wikipedia (Wilhelm Reich) says: “Reich’s Character Analysis was a major step in the development of what today is called ego psychology. In Reich’s view, a person’s entire character, not only individual symptoms, could be looked at and treated as a neurotic phenomenon. The book also introduced his theory of body armoring. Reich argued that unreleased psycho-sexual energy could produce actual physical blocks within muscles and organs, and that these blocks act as a body armor preventing the release of the energy. An orgasm was one way to break through the armor. These ideas developed into a general theory of the importance of a healthy sex life to overall well-being, a theory compatible with Freud’s views. His idea was that the orgasm was not simply a device to aid procreation, but was the body’s emotional energy regulator. The better the orgasm, the more energy was released, meaning that less was available to create neurotic states. Reich called the ability to release sufficient energy during orgasm “orgastic potency,” something that very few individuals could achieve, he argued, because of society’s sexual oppression. A man or woman without orgastic potency was in a constant state of tension, developing a body armor to keep it in. The outer rigidity and inner anxiety is the state of neurosis, leading to hate, sadism, greed, fascism and antisemitism.[23]”

While some technical details of Reich’s work on Character Analysis are somewhat dated and/or idiosyncratic, he was a great pioneer of this line of psychology. In fact, many of his ideas are still waiting to get the attention and further research I think they deserve.

I part ways with him on “Orgone energy”, however. Much of the sexual/biological energy Reich observed is in fact neurochemically produced and conducted. Orgone theory is largely superfluous. Many of his speculations were not irrational (being based on an electromagnetism-like metaphor before the discovery of neurochemistry) but neither were they borne out. In fact, Reich was far ahead of contemporaries such as Freud and Jung with his emphasis on physiology and on empirical, scientific experimentation. He was simply too early for the lab technology he needed. Some of his questions about biological energy are still unanswered and deserve to be researched today. He was a genius who got over-excited about a particular line of inquiry and his reach exceeded his grasp.

On the other hand, his “Mass Psychology of Fascism” deserves to be read by every psychologist, social/political scientist, and all students of human nature and the human condition. His studies of authoritarianism and sexual repression and metaphors such as “the emotional plague” and “the murder of christ” have enormous utility in understanding how authoritarian pathology is transmitted from one generation to the next.

The video (part 1 of 3) doesn’t seem to make any distinction between the value of his psychological work and his Orgone obsession. I have not viewed the other two parts.

Poor Richard

Q. What is solar power?

Solucar PS10 is the first solar thermal power ...
Image via Wikipedia

A. Solar power is a simple, sensible way to boil water and generate electricity (among other things). By contrast, nuclear power and fossil fuels are insane ways to boil water and generate electricity (or anything else).

Myth: Solar power is practical only in outer space.

Fact: Solar power is practical today at most places on the earth’s surface.

“Over the course of a year the average solar radiation arriving at the top of the Earth’s atmosphere is roughly 1,366 watts per square meter[2][3] (see solar constant). …The Sun’s rays are attenuated as they pass through the atmosphere, thus reducing the insolation at the Earth’s surface to approximately 1,000 watts per square meter for a surface perpendicular to the Sun’s rays at sea level on a clear day.” (http://en.wikipedia.org/wiki/Insolation) The sunlight available at the surface is MANY, MANY orders of magnitude greater than required for civilization’s power needs. Why go to orbit for something that is available in vast surplus on the ground? The same goes for obsessing about the efficiency of solar conversion for terrestrial applications. In space it matters–on the ground it’s often moot. It only affects the surface area requirement, which in many terrestrial applications is not a limiting factor.

(Picture above: “Concentrated Solar Thermal Power – The map shows how large of an area of desert would have to be covered with mirrors to produce all the electricity currently used by Europe (small square) or the entire World (larger square). Concentrated Solar Thermal could replace all nuclear power plants tomorrow. The sun’s rays are concentrated to produce high enough heat to drive steam turbines. This is the same technology as used in nuclear power plants, where the heat is supplied by highly radioactive uranium/plutonium fuel rods, except that here the heat is supplied directly by the sun’s rays…The technology is available. All we need is the political will to finance this one instead of the fossil fuel (& nuclear) based plants used today. Photo and text from Sepp Hasslberger’s Photos – Wall Photos on FaceBook. )
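
As a rough, back-of-the-envelope check on the kind of claim in the caption above, here is a small sketch in Python. The world demand figure, capacity factor, and conversion efficiency are assumptions chosen only for illustration; the 1,000 W/m2 peak figure is the surface insolation quoted above.

```python
# Back-of-the-envelope sketch of the land area needed to supply world
# electricity from concentrated solar power. All inputs except the peak
# insolation are assumptions for illustration, not figures from the text.

WORLD_ELECTRICITY_TW = 2.5      # assumed average world electric demand, TW
PEAK_INSOLATION_W_M2 = 1000.0   # clear-sky surface insolation (from the quote)
CAPACITY_FACTOR = 0.25          # assumed day/night, weather, latitude average
CONVERSION_EFFICIENCY = 0.20    # assumed sunlight-to-electricity efficiency

avg_output_w_m2 = PEAK_INSOLATION_W_M2 * CAPACITY_FACTOR * CONVERSION_EFFICIENCY
area_m2 = (WORLD_ELECTRICITY_TW * 1e12) / avg_output_w_m2
area_km2 = area_m2 / 1e6
side_km = area_km2 ** 0.5

print(f"Average output: {avg_output_w_m2:.0f} W per square meter of collector")
print(f"Required area: ~{area_km2:,.0f} km^2 (a square ~{side_km:.0f} km on a side)")
```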

Myth: Solar power is a futuristic high-technology that requires more R&D to become a practical and competitive industry

Fact: Solar energy has been harnessed since ancient times. Ancient accounts also mention water-lifting devices and moving mechanical statues that were powered by the sun’s heat. Concentrated sunlight powered some of the earliest steam engines in the 19th century.

“As early as 212 BC, the Greek scientist, Archimedes, used the reflective properties of bronze shields to focus sunlight and to set fire to wooden ships from the Roman Empire which were besieging Syracuse. (Although no proof of this feat exists, the Greek navy recreated the experiment in 1973 and successfully set fire to a wooden boat at a distance of 50 meters.)”

“100 AD to 400 AD – For three hundred years, the Romans use solar power to heat waters in bath houses.

“On September 27, 1816, Robert Stirling applied for a patent for his economiser at the Chancery in Edinburgh, Scotland.  This engine was later used in the dish/Stirling system, a solar thermal electric technology that concentrates the sun’s thermal energy in order to produce power.”

Mouchout engine -click image for source

“The first active solar motor was invented by Auguste Mouchout in 1861 and utilized solar power to entirely provide a fuel source for a steam engine.  In the same period, scientists in Europe developed large cone-shaped collectors that could be used to produce locomotion and refrigeration based upon the heating of ammonia. ”

“In the United States during the Civil War Swedish-born John Erickson, the famed inventor of the USS Monitor that greatly assisted the Union in naval battles, was also able to develop a trough collector that could function in a similar way that many solar cells developed nearly one hundred years later do.

1913-The first solar thermal power station

“Frank Shuman built the world’s first solar thermal power station in Meadi, Egypt. Shuman’s plant used parabolic troughs to power a 60-70 horsepower engine, which pumped 6,000 gallons of water per minute from the Nile River to adjacent cotton fields. Identical technology with relatively minor refinements is used today.”

The first electric dynamo capable of delivering power for industry was built by Hippolyte Pixii in 1832. Thus, engineers had mastered every technology needed to generate unlimited mechanical and electric power anywhere in the world over 100 years ago. There is one reason the world does not run entirely on solar power today: the captains of industry from the 1830s until now have invested their capital in coal, oil, gas, and nuclear energy because they could minimize competition and control availability. They could not control the availability and profitability of sunlight.

Their personal gain is mankind’s loss.

“We are like tenant farmers chopping down the fence around our house for fuel when we should be using Nature’s inexhaustible sources of energy — sun, wind and tide. … I’d put my money on the sun and solar energy. What a source of power! I hope we don’t have to wait until oil and coal run out before we tackle that.” Thomas Edison in conversation with Henry Ford and Harvey Firestone (1931); as quoted in Uncommon Friends : Life with Thomas Edison, Henry Ford, Harvey Firestone, Alexis Carrel & Charles Lindbergh (1987) by James Newton, p. 31

The most amazing thing about solar-thermal technology is that the basic parts involved existed several thousand years ago–they just weren’t combined in the smartest way until the 20th century. If a knowledgeable craftsman had traveled back in time (as Mark Twain’s Connecticut Yankee did) to King Arthur’s court, he could have built a concentrated solar-thermal power plant with the tools and materials of that day. All we are talking about is 1) mirrors, 2) plumbing, 3) a steam engine, and 4) a basic electric generator. Home-made examples of the latter two items are common in high-school science fairs.  Yet these systems, engineered to full scale and modern specifications, are capable of replacing coal and nuclear power plants, can be constructed by an existing class of contractors in less time than a coal-burning plant, and once built could operate safely with zero pollution for hundreds of years. These facts suggest that higher education has become principally a conduit of disinformation in the service of our corporate masters.

solar power station 1913 – click image for source

(List compiled from US Department of Energy – Solar History Timeline, Solar energy history by Max Rutherford, and A brief pictorial history of solar powered technology.)

Myth: Large solar power plants take up too much land, can only be situated in desert areas, and require inefficient, impractical, or unavailable energy storage and long-range transmission technologies.

Fact: Large scale solar power stations can be placed anywhere that cost-benefit constraints are met. They can be placed on any land unsuitable for other uses. Various designs are compatible with mixed land uses. Unlike most other generating methods, they require no water supply and create no public hazard, which gives them added placement flexibility. No evacuation plans or discharge scrubbers are required. No fuels must be mined or processed at enormous cost in land and energy resources. No large energy storage capabilities are required in countries with modern electrical distribution grids.

Myth: Solar power is inefficient and cost-prohibitive.

Fact: It’s true that the solar-electric “panel” or “module” for stationary terrestrial use is largely a 1960s-era technology that is nearly obsolete or inappropriate for many residential and commercial applications today. They are typically heavy, bulky, resource intensive, manufacturing intensive, and expensive at perhaps $5.00 per watt. Such panels are a pet “straw man” of the energy industry used to divert public attention from the appropriate solar power technologies of the present. The true state-of-the-art solar technologies for most terrestrial applications (including charging electric vehicles) today are:

1. Solar-electric power production integrated into building materials: The entire exterior envelope of residential and commercial buildings including roofing, siding, and window glazing can be constructed from materials and coatings with solar-electric generating capability built in. Some interior surfaces can also be used to convert ambient interior light into electricity. The use of this technology is limited only by investments in manufacturing capacity and the active obstruction of the legacy energy industry.

Ancient Roof Tiles Can Solar Power Your Entire Home

These gorgeous Solé Power Tiles are designed to capture and convert sunlight into cost-saving electricity without compromising aesthetics. They incorporate UNI-SOLAR thin-film flexible solar cells shaped into traditional clay tile shapes. The tiles are rated at 1 kW per 200 sq. ft. Most houses would have the space to fully power themselves using these solar tiles.
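
As a quick plausibility check on that rating, here is a small sketch; the roof area, household demand, and capacity factor are assumed values for illustration, not figures from the tile manufacturer.

```python
# Rough plausibility check of the "1 kW per 200 sq. ft." tile rating above.
# The roof area, household demand, and capacity factor are assumed values.

RATED_KW_PER_SQFT = 1.0 / 200.0   # from the quoted rating
ROOF_AREA_SQFT = 1500.0           # assumed usable roof area
ANNUAL_DEMAND_KWH = 11_000.0      # assumed annual household electricity use
CAPACITY_FACTOR = 0.18            # assumed average output / rated output

rated_kw = RATED_KW_PER_SQFT * ROOF_AREA_SQFT
annual_output_kwh = rated_kw * CAPACITY_FACTOR * 8760  # hours per year

print(f"Rated capacity: {rated_kw:.1f} kW")
print(f"Estimated annual output: {annual_output_kwh:,.0f} kWh "
      f"vs. assumed demand of {ANNUAL_DEMAND_KWH:,.0f} kWh")
```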

2. Solar-thermal power plants: A typical plant is an array of flat, conventional mirrors that concentrate solar radiation on a central heat exchanger. A heated fluid drives a turbine and electric generator. Such plants can be scaled to serve anything from a single family or farm to an entire city (and anything in between). Some designs are so simple they could have been constructed with 19th century technology. They can easily be built with the current level of engineering and construction expertise available in any city today.

Solar Towers PS20 and PS10, Spain

Large solar-thermal plants are commonly known as “solar power towers“. The land use impact of power towers is less than that of nuclear plants if the impacts of nuclear fuel mining, processing, disposal, and cooling are considered. Solar power towers need no cooling water and can be built on dry, non-arable lands. They need no complicated safety systems or backup and/or off-site electricity to prevent them from melting down or exploding. They represent no environmental or public health risk and have less environmental impact than a parking lot of comparable area.

Compared with the life cycle cost of a nuclear plant and its fuel, a solar power tower is far cheaper to build and to operate. Such plants can go from blueprints to operation in a single year. The deployment of this technology is limited only by capital availability and the active obstruction of the legacy energy industry.
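
To make the scale of such a plant concrete, here is a sketch of the basic power-conversion chain behind a power tower: mirror field, receiver, and steam turbine. The field area, irradiance, and efficiency figures are illustrative assumptions only, not data for PS10, PS20, or any particular plant.

```python
# Sketch of the power-conversion chain for a solar power tower:
# mirror field -> receiver (heat exchanger) -> turbine/generator.
# All input values are illustrative assumptions, not plant data.

MIRROR_AREA_M2 = 300_000.0   # assumed total heliostat mirror area
DNI_W_M2 = 800.0             # assumed direct normal irradiance, clear day
OPTICAL_EFF = 0.60           # assumed field optical efficiency (cosine, spillage, reflectivity)
RECEIVER_EFF = 0.85          # assumed receiver thermal efficiency
TURBINE_EFF = 0.35           # assumed steam-cycle (turbine + generator) efficiency

power_on_mirrors_mw = MIRROR_AREA_M2 * DNI_W_M2 / 1e6
power_to_receiver_mw = power_on_mirrors_mw * OPTICAL_EFF
thermal_power_mw = power_to_receiver_mw * RECEIVER_EFF
electric_power_mw = thermal_power_mw * TURBINE_EFF

print(f"Sunlight on mirrors:   {power_on_mirrors_mw:6.1f} MW")
print(f"Delivered to receiver: {power_to_receiver_mw:6.1f} MW")
print(f"Useful thermal power:  {thermal_power_mw:6.1f} MW")
print(f"Electric output:       {electric_power_mw:6.1f} MW (at full sun)")
```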

It bears repeating that plants like these could have been built 150 years ago. This is concrete, glass, steel, plumbing, steam engines, and electric generators. All these components and materials were available, and all the engineering knowledge was widespread in 1860 or even earlier. Since the factory machines and transportation vehicles were built to use oil, coal, and gas, populations were forced to purchase those fuels from the small number of robber-barons who accumulated monopolistic control of the supplies. Thus the world’s population was held hostage to energy czars for at least seven generations, while enough sunlight to power human civilization was wasted. This was not a result of “human error”.

List of solar thermal power stations

Myth: Photovoltaic systems require expensive parts, professional installation, and high-tech manufacturing facilities.

Fact: There is a wide variety of do-it-yourself options for residential photovoltaics. For example:

However, assembling solar panels or systems from purchased materials may not be the only option. I’m also interested in basic manufacturing technologies for photovoltaic film, ink, paint, etc. that can be adapted to a small-scale, low-capital, low-tech cottage industry.

I know enough about photovoltaic technologies (amorphous thin films, bio-dyes, polymer inks, etc) that I feel pretty certain a low-tech, low volume manufacturing process is possible. Such a process could allow small-scale entrepreneurs to make photovoltaic materials and devices in the garage. It doesn’t have to be high-efficiency. It just needs to be cheap and safe to make.

If it exists, it could be a game-changer. It might even be the most disruptive technology yet because of the number of people who could establish low capital, low tech local cottage industries around it.

Miscellaneous facts:

Wind is also an indirect form of solar energy, and there are numerous ways to capture the wind’s energy to supplement direct sunlight.

The combination of these solar power technologies is sufficient to meet nearly all terrestrial power needs. Both are safer, simpler, and cheaper than other renewable sources of energy such as large hydro dams, ocean waves, and geothermal.

Going off the grid


For small residential and farm applications where complete energy autonomy is the goal, it is often useful to combine solar technologies with complementary systems like small windmills and micro-hydro systems. Simple solar water heating systems may have a role as well.

Another indirect solar energy technology, cellulosic ethanol in various blends with conventional fuels, is appropriate for the installed base of gasoline-burning vehicles until the existing fleet can be retired.

The legacy energy industry spends a fortune on public propaganda and scientific disinformation and on political and academic bribes to retard and control the impact of these solar technologies. The continued promotion of, use of, and investment in nuclear and fossil-based power constitutes a crime against humanity and against all life on earth.

Poor Richard

 

Addendum:

A Brief History of Solar Panels

Inventors have been advancing solar technology for more than a century and a half, and improvements in efficiency and aesthetics keep on coming

 

Heliostat improvements for solar power towers
h/t Abhijit Anand Prabhudan

Madrone and Griffith realized that they could cut down on the heft required of the heliostats by using a huge number of small mirrors to replace what would normally be a smaller number of big ones. Small mirrors hug the ground and thus carry smaller wind loads. And small, light-duty heliostats could be built from plastic, following an approach that’s similar to the way certain flowering plants track the sun’s daily movements. “Originally, we were origami inspired, and now we’re bio inspired,” says Madrone.

Her latest prototype aims a mirror by varying the pressures within pneumatically inflated plastic chambers, which can be mass-produced with the same tooling used to make plastic bottles. “If we keep using heliostats that have been around for half a century, there’s no way the price is going to go down,” says Madrone. “If we don’t start taking advantage of new technologies, we’re just going to lose the solar game.”

Q. What is nuclear power?

Internationally recognized symbol.

Image via Wikipedia

A. “Nuclear power is a crazy way to boil water.” —Helen Caldicott

“Nuclear Power is not the Answer” –Helen Caldicott

Instead of focusing on nuclear safety, Amory Lovins looks at the economic problems of nuclear technology and its negative impact on alternative strategies to mitigate climate change:

Amory Lovins: Congressional testimony on energy solutions

Nuff said.

Poor Richard

Related Articles


Radiation at Fukushima Daiichi

NY Times May 17, 2011

Forecast for Plume’s Path

NY Times May 16, 2011

click for real-time update of radiation readings

General Utility 2.0

Update: General Utility 3.0 (10/4/2022)

I brainstorm a taxonomy of wellbeing metrics at the end of my essay “General Utility 2.0”. A complementary taxonomy of suffering metrics should be possible.

In “General Utility 2.0” I imagine a “graphic equalizer” metaphor whereby we can adjust the weights of wellbeing parameters to achieve the most pleasing combinations. But since writing that essay I’ve come around to a preference for minimizing the suffering parameters instead of optimizing wellbeing parameters. Wellbeing is something we should leave entirely open to requisite variety, which weighs against any single point of convergence towards which we might optimize. Minimizing pointless suffering also has a similar issue, in that suffering is different things to different people, but on that side there is less harm in focusing more on what people have in common than on ways they differ. If we can minimize the most common parameters of suffering we shall have done great good without imposing any artificial conformity on the positive wellbeing side. This is analogous to a negative framing of the golden rule–do not do unto others that which you would not have done to you. There is some evidence that such was the earlier formulation of the maxim.

A Late Period (c. 664–323 BCE) papyrus contains an early negative affirmation of the Golden Rule: “That which you hate to be done to you, do not do to another.”[12]

One should never do something to others that one would regard as an injury to one’s own self. In brief, this is dharma. Anything else is succumbing to desire.

— Mahābhārata 13.114.8 (Critical edition)
The Mahābhārata is usually dated to the period between 400 BCE and 400 CE.[13][14]

https://en.m.wikipedia.org/wiki/Golden_Rule


Three models of change in scientific theories,...

Image via Wikipedia

General Utility 2.0, Towards a science of happiness and well-being

A defect in some forms of consequentialism is an externalization of subjective, qualitative states such as happiness or contentment from a tally of consequences. Incorporating subjective values, states, or qualia as consequences of an action or circumstance is one of the aims of “General Utility 2.0”. In addition, taking a cue from Sam Harris’ Moral Landscape, I attempt to begin a catalog of objective correlates to subjective states as a quantitative framework (sometimes referred to as a “hedonic calculus”) for a science of happiness and wellbeing. This doesn’t exclude subjective self-reports by any means, but tries to supplement them with metrics that might capture unconscious aspects of wellbeing or compare self-assessments of things such as health status, for example, with more objective measures.
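
As a toy illustration of the “graphic equalizer” metaphor (and of the later preference, in the 3.0 update above, for minimizing common suffering parameters), here is a sketch of a weighted index. The parameter names, scores, and weights are placeholders, not a proposed catalog of metrics.

```python
# Toy sketch of the "graphic equalizer" idea: a weighted aggregate over
# wellbeing parameters, plus a separate index of common suffering parameters
# to be minimized. Parameter names, scores, and weights are placeholders.

from typing import Dict

def weighted_index(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted average of 0-1 scores; the weights act like equalizer sliders."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

wellbeing_scores = {"health": 0.7, "social_ties": 0.6, "autonomy": 0.8}
wellbeing_weights = {"health": 2.0, "social_ties": 1.0, "autonomy": 1.0}

suffering_scores = {"pain": 0.2, "hunger": 0.0, "insecurity": 0.4}
suffering_weights = {"pain": 2.0, "hunger": 3.0, "insecurity": 1.0}

print(f"Wellbeing index (higher is better): {weighted_index(wellbeing_scores, wellbeing_weights):.2f}")
print(f"Suffering index (to be minimized):  {weighted_index(suffering_scores, suffering_weights):.2f}")
```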

Stephen Jay Gould seems to have spoken for many when he proposed that science and religion, or the domains of “is” and “ought”, are “non-overlapping magisteria,” and opined that “science and religion do not glower at each other…but interdigitate in patterns of complex fingering, and at every fractal scale of self-similarity.”

But whether or not the “magisteria” of science and religion overlap is the question (a version of the demarcation problem), not the answer. In my opinion they do overlap in the following way: religion, philosophy, and science widely overlap in the domain of 1) asking questions about the world, and 2) interpreting evidence, although each may specialize in the types of questions it chooses to ask and the kinds of evidence it chooses to interpret.

The problem, and the glowering, arises when it comes to 3) the scientific method and the CRAFT of gathering and validating evidence, regardless of whether the evidence concerns atoms, evolution, or out-of-body experiences. One person’s justified belief is another person’s heresy.

More to the point of this essay, one person’s “ought” is another person’s “ought not”. Curiously, the difference between an ought and an ought-not often comes down to what “is”. My choice to give a panhandler some money or not may depend on whether we are standing in front of a cafe or a liquor store. The more complete our information about people and situations the better we can decide about the “right” thing to do, regardless of our moral framework.

The hypothesis on which this essay is based is this: the more we know about the domain of what is, the smaller the gulf between “is” and “ought” becomes. I will approach the domain of what is, as it concerns happiness and well-being, through the lens and methods of science, without intending to question or threaten any beliefs that science may be silent about.

Reduction of a duck

Hopefully this will not be dismissed as scientism, positivism, reductionism, or materialism. These terms have various negative connotations but what they may all have in common is a criticism of scientific authority when it violates the parsimony principle. In other words, legitimate science crosses the line into scientism when it takes authoritative positions without sufficient empirical evidence to justify scientific conclusions. A lack of evidence for a proposition (the existence of a God, for example) is not proof of the contrary (negative) proposition. The legitimate scientific position where evidence is lacking is to abstain from drawing conclusions, period.

On the other hand, it is perfectly proper for science to evaluate the conclusions of scientists and non-scientists alike, the methods by which such conclusions are reached, and the evidence on which they are based; and to judge their scientific merit. There are many truth claims popular in modern times that are demonstrably false. I think it is important for people in all walks of life to have the opportunity to learn what science may say about the truth claims we are bombarded with all the time by our authority figures and peers. Even in those cases where science must remain parsimoniously silent, that silence may speak volumes.

Sam Harris, The Moral Landscape

http://www.youtube.com/watch?v=UrA-8rTxXf0

Are we satisfied with people doing good for the wrong reasons or doing wrong for good reasons? Doing something for a wrong reason increases the risk of bad side-effects and unintended consequences, including but not limited to the consequence of reinforcing the fallacies behind the original motivation. The branch of moral philosophy known as consequentialism emphasizes the results or consequences of an action or rule over the importance of intentions or motives. What I like about consequentialism in general is a concern about unintended consequences because, after all, “The road to hell is paved with good intentions.” It also accords with the aphorism that “By their fruits [i.e. results—not words, reputations, intentions, etc.] shall ye know them.” Results alone may not be sufficient to justify actions, but neither are intentions. Of the two, results may well be the more germane; and they are certainly the more easily quantifiable.

Wikipedia says:

Consequentialism refers to those moral theories which hold that the consequences of one’s conduct are the true basis for any judgment about the morality of that conduct. Thus, from a consequentialist standpoint, a morally right act (or omission) is one that will produce a good outcome, or consequence. This view is often expressed as the aphorism “The ends justify the means”.

Consequentialism is usually understood as distinct from deontology, in that deontology derives the rightness or wrongness of one’s conduct from the character of the behavior itself rather than the outcomes of the conduct. It is also distinguished from virtue ethics, which focuses on the character of the agent rather than on the nature or consequences of the act (or omission) itself.

Obviously the glaring problem with that definition is “The ends justify the means.” We have a strong, intuitive, negative reaction to that. But consequentialism makes no such absolute, categorical dictum (in distinction to other schools of thought which might hold the contrary dictum that ends can never justify means). Consequentialism holds that both means and ends have consequences and that a valid utility calculation would include both. Would you tell a lie to save an innocent life? Would you kill someone to save your own life? Would you kill someone to save thousands of innocent lives? If so, you may be a consequentialist. Would you cheat, lie, and steal to win a political election? If so you may be a scumbag, but not necessarily a consequentialist.

The saying “the ends justify the means” is often used to justify means which are actually ends in themselves or which serve ends that are not explicitly stated by those who employ them. Or the stated ends may fail to include various “externalities” or side-effects, by-products, or other consequences which were unintended and/or unjustified.

I can’t prevent the concept of consequentialism from being applied euphemistically and disingenuously to provide cover for special interests and antisocial behaviors, but that is in direct contradiction to the aim of empirically and transparently accounting for all consequences—including side-effects and so-called externalities.

Consequentialism does not hold that results/ends matter more than methods/means but that consequences matter more than intentions. Methods and means are susceptible to empirical evaluation as to their consequences (both direct and incidental) and fitness for any given purpose, but intentions are not.

As goodgraydrab put it in a discussion at The Reason Project Forum, “empirical evidence can significantly alter the notion of support for “justification” and “means,” while at the same time examining the validity and motivation for the “ends,” over unfounded supernatural biblical belief and political greed.”

We often fail to define and justify our “ends” in a full and explicit way. One goal often contradicts another goal. That is why we usually say the ends don’t justify the means. What this really says is that certain implicit goals, such as civil society (the rule of law), or the value of personal virtue, are considered axiomatic and must not be contradicted by the means used to achieve other goals, such as accumulation of personal wealth. The means for achieving one goal may violate or defeat achieving another, perhaps even more important, goal.  Means are properly justified (or judged) by all their consequences, intended or unintended; or to put it another way, by their effectiveness and by all their side effects. In consequentialism, no “externalities” can be sanctioned. To recognize one set of consequences and ignore others would be outright hypocrisy or subterfuge, not consequentialism.

The harder philosophical issue may be judging the merit of the desired “ends” or goals.

Goals can be classed as individual or collective. There is a natural tension between these two categories that can be difficult to resolve even by concepts of enlightened self interest and maximum utility. Human beings would not be well served by the deterministic rules of ant society. A model of maximum utility that includes human beings requires a certain amount of capability, freedom, and dignity, as well as some amount of inequality or disequilibrium. But when, where, why, and how much?

That is where a generic version of consequentialism, utilitarianism or utility theory that I call “general utility” comes in.

General Utility vs The Noble Savage and a Morality based on “Natural Law”

Jean-Jacques Rousseau…argued that civilization, with its envy and self-consciousness, has made men bad. In his Discourse on the Origins of Inequality Among Men (1754), Rousseau maintained that man in a State of Nature had been a solitary, ape-like creature, who was not méchant (bad), as Hobbes had maintained, but (like some other animals) had an “innate repugnance to see others of his kind suffer” (and this natural sympathy constituted the Natural Man’s one-and-only natural virtue). It was Rousseau’s fellow philosophe, Voltaire, objecting to Rousseau’s egalitarianism, who charged him with primitivism [see anarcho-primitivism] and accused him of wanting to make people go back and walk on all fours. Because Rousseau was the preferred philosopher of the radical Jacobins of the French Revolution, he, above all, became tarred with the accusation of promoting the notion of the “noble savage”, especially during the polemics about Imperialism and scientific racism in the last half of the 19th century. Yet the phrase “noble savage” does not occur in any of Rousseau’s writings. In fact, Rousseau arguably shared Hobbes’ pessimistic view of humankind, except that as Rousseau saw it, Hobbes had made the error of assigning it to too early a stage in human evolution.”  (Wikipedia: Erroneous_Identification_of_Rousseau_with_the_noble_savage)

IMO Rousseau actually WAS  enamored of the “noble savage” fantasy though he didn’t use the phrase. However, this controversial and highly speculative side-track to his thought is entirely irrelevant. What matters is the present social contract, not vague speculations about prehistory.

“For Rousseau the remedy was not in going back to the primitive but in reorganizing society on the basis of a properly drawn up social compact, so as to “draw from the very evil from which we suffer [i.e., civilization and progress] the remedy which shall cure it.”  (Wikipedia: Erroneous_Identification_of_Rousseau_with_the_noble_savage)

Civic/legal/moral equality is a relatively modern idea and derives little from evolution or ancient cultural traditions, with the possible exception of the old canard that we are all equal “in the sight of God.” We need to replace both the legacy of evolution and the “sight of God” with the insight of a humanity which bases its ethics on reason rather than appeals to authority, history, nature, or divine revelations.

All rights (including human rights, civil rights, and property rights) are the products of contract, the most fundamental of which is the “social contract.” Regulation and enforcement of contracts (and thus rights) is a matter of jurisprudence and jurisdiction. Law determines what rights may be inalienable in a given jurisdiction, just as law determines what contracts are legal or illegal. This is all a matter of LAW. Theories based on or derived from “natural law,” “natural rights,” or even on economics are irrelevant to the question of rights and property except to the extent that such ideas have been (and still are) toxic to the evolution of jurisprudence. Rousseau, Kant, and many, many others have tried to settle the rights question on the basis of some “natural law” which remains a speculative and unsettled theory.

IMO “Natural law” is simply a modern facade for divine law. It is a fiction. I do not support morality or ethical systems based on religion or fairy tales. I prefer jurisprudence based on empirical (quantifiable and verifiable) equity, and preferably in a framework of the greatest good for the greatest number (general utility). That is the only proper, objective (non-fictional) basis for morality, ethics, law, or enlightened self-interest. I only wish this were considered self-evident by more people. Instead we constantly debate rights on the basis of philosophy, religion, ideology, or economic theory, none of which provide a sound foundation for rational human rights, civil rights, or property rights.

But where do we get the right foundation for a modern and rational social contract? Not from any form of conventional deontological or consequentialist morality. Deontology and consequentialism are usually contrasted with one another:

“Consequentialism, we are told, judges the rightness or wrongness of an action by the desirability of the outcome it produces; a deontological system, on the other hand, judges actions by whether or not they adhere to certain rules (e.g. ‘don’t censor newspapers’).” (“All Ethical Systems are Both Deontological and Consequentialist” by Noahpinion)

Deontology is really just after-the-fact consequentialism. If not prior experience (and thus appreciation of consequences), what would deontological rules and duties be based on other than some form of moral superstition or conjecture? One answer is that deontology is based on contract theory.

“Contract theory is the whole drama of deontology (intent) and utilitarian (outcome) mergers. Agential problems are the friction between the crude utilitarian measures/systems/incentives and the deontological issues of contracts, and when such problems are insurmountable at any given present, we have tension that can lead to revolution.” ~Robert Ryan

Yes, the rules and duties of any deontology worth its salt are not merely unilateral assertions–they are social contracts. But the utility of a contract is not merely in its intent. The utility must also be judged by results. Both the intents and the results matter, in the same way that both the ends and the means matter; and both are embraced in the concept of general utility.

Bateson quote balance

General utility (my term for a generic form of consequentialism) is not arbitrary, authoritarian, philosophical, religious, ideological, historical, anthropological, or tradition-bound. Nor is it cruel or heartless. (What kind of madman would calculate well-being or “the greatest good” without taking subjective needs into account?) By general utility I mean much more (and less) than narrow market-based utility functions that are full of externalities.

The so-called “utility function” in economics and the many varieties of utilitarianism and consequentialism have their critics and their historical baggage. In philosophy, economics, and social science utility functions  have been formulated in overly vague, reductive, or simplistic ways often rife with primitive, pre-scientific assumptions and externalities.  Utilitarianism and consequentialism have countless variants each with its controversies. I assume a priori that any economic or philosophical school has historical baggage and needs to be reformulated to conform with a modern empirical framework. Henceforth I will refer to that scientific framework as General Utility 2.0 (GU2). I call it “general” utility to distinguish it from prior species of utility theory and utilitarianism which I characterize as  narrow or “special” versions of utility.

In theory, the GU2 framework is a multi-dimensional matrix of all variables that impact the well-being and flourishing of human life and everything on which it depends, including the biosphere.

Sam Harris is a neuroscientist, but I don’t think he is saying that well-being is only a matter of mental states. Things like organic health, economic well-being, and ecological fitness are important, too.

I hope that an empirical approach to ethics can breathe new life and scope into consequentialism and the utility function. In my opinion, “what is” and “what ought” are on a collision course, and Sam Harris may be one of the early pioneers of that convergence.

Objections to utility

The most compelling objection to consequentialism is that consequences are unpredictable or uncontrollable despite our best intentions and our best-laid plans. This is the problem of chaos, complexity, and uncertainty. But as a critique of consequentialism this is simply a special case of the repudiation of science in general. By a similar process of reasoning we would conclude that science is impractical and that reality is unknowable because it is not perfectly or completely knowable. I hope that pointing out this fallacy dispenses with the “unknowability of consequences” objection.

Perhaps the most common objections to consequentialism are those that simply assert a bias for other standards of morality that are thought to be morally superior. I argue that non-empirical standards, whether based on authority, history, or any other unfalsifiable theory, are no longer worthy of serious consideration in today’s world.

I already discussed the objection to “ends justifying means” but here is a particular example I recently came across:

“Utilitarianism cannot protect the rights of minorities if the goal is the greatest good for the greatest number.  Americans in the eighteenth century could justify slavery on the basis that it provided a good consequence for a majority of Americans.  Certainly the majority benefited from cheap slave labor even though the lives of African slaves were much worse.”

Many objections to utility have to do with the difficulties in weighting, aggregating, and computing individual vs collective well-being.  Some methods can produce “repugnant” results for individuals and minorities, and even for the whole population in some instances. These are mostly methodological issues.
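To make that methodological point concrete, here is a small, purely hypothetical sketch of my own (the two “societies” and their scores are invented numbers, not data) showing how three common aggregation rules can rank the same populations differently:

```python
# Hypothetical illustration: the same two toy "societies" ranked by three
# different aggregation rules. Well-being scores are invented numbers.

society_a = [9, 9, 9, 1]   # a prosperous majority and one badly-off member
society_b = [6, 6, 6, 6]   # everyone moderately well off

def total(scores):     # classical "greatest total good"
    return sum(scores)

def average(scores):   # average utilitarianism
    return sum(scores) / len(scores)

def maximin(scores):   # judge by the worst-off member (Rawls-flavored)
    return min(scores)

for name, rule in [("total", total), ("average", average), ("maximin", maximin)]:
    a, b = rule(society_a), rule(society_b)
    winner = "A" if a > b else "B"
    print(f"{name:8s}  A={a:5.2f}  B={b:5.2f}  ->  prefers society {winner}")

# total and average prefer A (28 vs 24, 7.0 vs 6.0), while maximin prefers B
# (1 vs 6): the choice of aggregation rule, not the data, decides the result.
```

The slavery example above is exactly this failure mode: a large total or average can conceal severe harm to a minority, which is why the choice of aggregation method matters as much as the measurements themselves.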

The biosphere and well-being of future generations must also be taken into account in GU2, and the means for achieving maximum utility must conform with certain axiomatic constraints such as justice and allowance for prior conditions. For example, if maximum utility prescribes a population smaller than presently exists, the population goal must be achieved via natural attrition rather than mass extermination.

Further, the goal of GU2 is not to force people autocratically into conformity with some computed state of maximum utility. The idea is to make information about the consequences of possible choices available, in the expectation that such information will affect the choosers. The GU2 model would allow the results of changes to any variable to be distributed across all knowledge domains and the consequences estimated.

As goodgraydrab’s comment quoted earlier suggests, empirical evidence can inform both the justification of the means and the validity and motivation of the ends.

Since in the end people still have to decide how to weigh variables and how to apply GU2 information, the results of maximizing utility should not violently contradict generic, intuitive attitudes towards well-being. The hope is rather that such empirical knowledge would SHAPE such attitudes for the better.

The following is little more than a brainstorming effort, but I think it’s helpful to have some concrete iteration of an idea to work from.

General Utility 2.0 Framework

A CRUDE TAXONOMY OF WELL-BEING/FLOURISHING/QUALITY OF LIFE

I. Identity

  1. personal profile
  2. demographic info
  3. physical metrics and descriptors
  4. biographical info

II. Happiness (mental/emotional state)

Note: possibility of real-time monitoring of some factors

  1. vital signs
  2. galvanic skin resistance
  3. pupil dilation
  4. brain scans (qEEG, fMRI)
  5. hormone levels
  6. homeostasis
  7. presence/absence of stress or other happiness inhibitors
  8. subjective reports
  9. etc.

III. Health & longevity (many dimensions)

IV. Safety/security (ditto)

V. Freedom/constraint/capability (ditto)

VI. Information/communication

A. Education

  1. Formal education
  2. Self education

B. Self-knowledge

  1. implicit associations and biases
  2. conscious values/beliefs
  3. strengths and weaknesses
  4. habits
  5. effective/ineffective reinforcement history
  6. etc.

C. Beliefs and opinions

D. Cognitive and communication skills

  1. IQ
  2. verbal
  3. written
  4. logic
  5. cognitive deficits
  6. internet
  7. mobile
  8. etc

VII. Social matrix

  1. Status (gender, age, wealth, power, rank, position, fame, celebrity, etc.)
  2. Family
  3. Friends
  4. Community
  5. Employment (job code, job satisfaction, working conditions, culture, co-worker relations, etc.)
  6. Memberships and affiliations
  7. On-line social networks
  8. Other support networks

VIII. Skills & abilities (academic, technical, mechanical, professional, athletic, parenting, housekeeping, etc.)

IX. Standard-of-living factors

  1. market basket
  2. assets & liabilities
  3. disposable income
  4. etc.

X. Other quality of life factors

  1. creative activities
  2. recreation
  3. exposure to nature
  4. etc.

XI. Contributions and Costs to the flourishing of others (including ecosystem impacts: carbon footprint, resource footprint, etc.)

It is important to note that various instruments already exist to measure nearly all the parameters in the taxonomy above and thus create well-being “profiles” of individuals and groups.

The next level of developing the GU2 model would be to correlate every species of data in the profile so that a change in one variable would be reflected in any others where a relationship was known. So the GU2 framework is a model of both data and relational algorithms.

The GU2 framework might be thought of as analogous to the control board in a recording studio. All the individual parameters of the sounds on multiple “tracks” can be adjusted and combined in an infinite number of ways, but somehow one particular set of levels gets chosen as the most pleasing combination. The old-school theories of philosophy, economics, politics, and social welfare might be analogous to the generic rock/pop/jazz settings on a cheap graphic equalizer. GU2 encourages a much more granular, eclectic, and empirical approach to altering parameters and measuring results, either as simulations or as interventions in the real world.
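Here is a minimal sketch of what such a “control board” might look like in code. It is purely illustrative and entirely hypothetical: the parameter names, correlation coefficients, and weights are invented stand-ins, not measured quantities.

```python
# Hypothetical sketch of a GU2-style "control board": a profile of well-being
# parameters, a few known relationships between them, and adjustable weights
# that act like sliders on a mixing console. All names and numbers are invented.

profile = {            # normalized 0-1 scores for one person or group
    "health": 0.70,
    "security": 0.80,
    "social": 0.55,
    "standard_of_living": 0.60,
}

# Known relationships: raising one parameter nudges correlated ones.
# (In a real model these coefficients would come from empirical studies.)
correlations = {
    ("standard_of_living", "health"): 0.3,
    ("social", "health"): 0.2,
}

weights = {            # the "equalizer" sliders, chosen by the user
    "health": 0.4,
    "security": 0.2,
    "social": 0.2,
    "standard_of_living": 0.2,
}

def adjust(profile, parameter, delta):
    """Change one parameter and propagate the change along known correlations."""
    new = dict(profile)
    new[parameter] = min(1.0, max(0.0, new[parameter] + delta))
    for (src, dst), coeff in correlations.items():
        if src == parameter:
            new[dst] = min(1.0, max(0.0, new[dst] + coeff * delta))
    return new

def general_utility(profile, weights):
    """Weighted aggregate well-being score for this choice of sliders."""
    return sum(weights[p] * profile[p] for p in profile)

before = general_utility(profile, weights)
after = general_utility(adjust(profile, "standard_of_living", 0.1), weights)
print(f"utility before: {before:.3f}, after raising standard of living: {after:.3f}")
```

Different settings of the weights correspond to different “pleasing combinations” in the mixing-board analogy, and propagating a change along the correlations is a toy version of estimating consequences across knowledge domains.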

What is the goal?

Everyone has multiple goals with some degree of overlap and conflict. The best way I know to express the overall goal of General Utility 2.0 is this: to enhance the process of evolution. What I mean by evolution is the on-going emergence of new and increased capacities and capabilities in the biosphere and its parts, including but not limited to ourselves.

What happens if we maximize the biomass of human neurons on the planet and minimize the mass of human fat cells? This is a far-fetched question even in the context of GU2. But the current impossibility of simulating such scenarios is not a bad reason for investing in GU2.

Utility is actually implicit in everything we do. The goal of GU2 is to make it explicit. This will seem like a bad idea to some. Many may feel, not without some justification, that such knowledge is dangerous. The funny thing about knowledge, though, is that a little bit is more dangerous than a lot.

The brain is a powerful utility-computing device, but it is an analog device with many eccentric, ad hoc methods for doing its job. An increasing number of brains are becoming aware of this limitation and they are developing science and technology to enhance the power and quality of utility computation. These rational cognitive prosthetics, enhancements, and quality controls are vital because the biological brain is not able to evolve rapidly enough to deal with changing environmental conditions.

Some rational utility computations will no doubt conflict with eccentric brain-based computations. Many of our human eccentricities may be relatively harmless. Some may be essential to who we are. Certainly some are beautiful to us and are deeply cherished. Unfortunately, some are also responsible for a great deal of human suffering and environmental damage. Sorting it all out will not be easy or painless but that is the goal of GU2.

I would also say that a goal of GU2 is for humanity to achieve greater moral-ethical maturity, i.e., to put away childish, pre-scientific notions of morality and to grow up.

Poor Richard

Related Articles and Resources

Health Index: A Hypothetical Index to Assess the Health of a Society w/ Daniel Schmachtenberger, Published on Jun 2, 2021


theory of stuff

plasma lamp

Image via Wikipedia

Matter is said to have various forms–solid, fluid, gas, plasma, etc.

Energy is also said to have various forms–kinetic, potential, thermal, gravitational, sound, elastic, electromagnetic, etc.

Energy and matter are transformable from one to the other, as when wood burns or nuclear weapons explode. The amount of energy (e) in a piece of matter is equal to the mass (m) of the matter times the speed of light squared (c²), giving the famous formula e = mc².
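As a rough back-of-the-envelope illustration of the scale involved (a sketch of my own, using a rounded value for the speed of light), converting just one kilogram of matter entirely into energy gives:

```python
# Back-of-the-envelope illustration of e = m * c^2, with a rounded constant.
c = 3.0e8            # speed of light in meters per second (rounded)
m = 1.0              # one kilogram of matter

e = m * c**2         # energy in joules
print(f"{e:.1e} J")  # ~9.0e16 J, roughly twenty megatons of TNT equivalent
```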

According to the theory of stuff, energy and matter are two different forms of stuff. Stuff may have other forms such as dark matter or dark energy (technically called strange stuff), but we aren’t really sure yet. The important thing is that energy and matter are two forms of the same stuff. What stuff is, in and of itself, is not known. We may know in the future, but we don’t know now. All we know now is that matter and energy are two forms of stuff which can be converted back and  forth. If any of our matter/energy is changing into other kinds of stuff, other kinds of stuff may be changing into our familiar stuff in exchange, without us ever suspecting a thing.

As far as we can tell, the total quantity of matter/energy stuff  in our universe is constant, but this may be a peculiarity of our perspective– our spatially, temporally, and constitutionally limited and local observation. Everything is “wiggling and giggling” so much it’s hard to get a clear fix on things. For all we know, our whole universe is blinking in and out of existence and alternating with any number of other universes. This is the alternating multiverse (AM), as opposed to the direct multiverse (DM), theory. As in the case of alternating and direct current (AC and DC), both AM and DM may coexist[1].

There are many forms of stuff and many, many ways that stuff may interact with other stuff. It is unlikely we will ever know the half of it. It is perfectly reasonable for Shakespeare to say “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.” (Hamlet, Act 1, Scene 5). However, all forms of stuff, all properties and behaviors of stuff, and all interactions of stuff with other stuff are phenomena of stuff.

And by convention/definition, all stuff is what we call “natural”. Natural simply means “made of real stuff” as opposed to imaginary stuff like unicorn poop.

Getting to the point…

The only point I have here should go without saying: anything that is or has ever been attributed to supernatural agencies or mechanisms, if there is any validity to the experience or observation in question,  is most likely the work of some form of natural stuff.

Just as there is no fundamental dichotomy between matter and energy (only a diversity in the observed form and behavior of stuff), there is probably no real dichotomy between matter and what we call spirit. If spirit exists, it is probably made of stuff.

The real dichotomy is the one between justified and unjustified belief.

Whatever kind of stuff or behavior of stuff is in question, the difference between anecdote and scientifically-established, probable fact remains. All the distinctions between well-controlled experiments and one-off observations, between high and low probability, between justified belief and imagination, etc.– all those distinctions remain in full force and effect.

If that’s not what you are hearing in church lately, maybe you should switch to the Church of Reality.

The Church of Reality

The Church of Reality is about making a religious commitment to the pursuit of the understanding of reality as it really is.

This reality is the sum of everything that actually exists. Our definition of reality includes what some people call “other realities” that actually are real with the exclusion of imaginary realities and religious fiction. We care about what is really real, not what we want to believe is real.

Maybe I’ll see you in church….

Poor Richard

________________________________________

Footnotes:

1. My whimsical description of an alternating wave function producing an Alternating Multiverse is remarkably similar on some points to the “Many Worlds” hypothesis of Hugh Everett:

Wikipedia: Hugh Everett III (November 11, 1930 – July 19, 1982) was an American physicist who first proposed the many-worlds interpretation (MWI) of quantum physics, which he called his “relative state” formulation. He switched thesis advisors to John Wheeler some time in 1955, wrote a couple of short papers on quantum theory and completed his long paper, Wave Mechanics Without Probability in April 1956[2] later retitled as The Theory of the Universal Wave Function, and eventually defended his thesis after some delay in the spring of 1957. A short article, which was a compromise between Everett and Wheeler about how to present the concept and almost identical to the final version of his thesis, appeared in Reviews of Modern Physics Vol 29 #3 454-462, (July 1957), accompanied by a supportive review by Wheeler. The physics world took little note.

http://www.youtube.com/watch?v=X8Aurpr68uE