
This article appears in the April 17, 2020 issue of Executive Intelligence Review.

September 8, 1987

NEVER BEFORE PUBLISHED

The ‘Strong Hypothesis’ of Biophysics


The following is prompted by reading of the manuscript of the eminent Dr. Sydney J. Webb, “A Possible New Approach to Force Fields and Biophysics Through a Unification of Modern and Classical Physics.” Despite a strong criticism, whose nature will soon become obvious, I believe it urgent to cause the manuscript to be published soon, with very little editing of the literary form for such included improvements as a paragraphing more convenient to the reader, some footnotes needed for a broader readership among scientists turning their attention now to this current of biophysics, and so forth.

Although the subject, optical biophysics, is not within the province of ICLC[fn_1] membership generally, there are three reasons that the membership as a whole must have a certain sort of competence in key aspects of that subject-matter. The urgency of AIDS research is one such reason; the emerging strategic role of electronic agents of biological warfare, is another. The “political heat” broadly to be experienced in connection with these two applications, will be greater than we have experienced since our February 1982 introduction of what became known later as the “Strategic Defense Initiative (SDI).”

My criticisms of Dr. Webb’s choice of physics define the specific kind of competence which must become established within our membership as a whole. This represents not merely a criticism, but, rather, the definition of a vantage-point from which our membership can develop a competent grasp of those aspects of the subject-matter of optical biophysics which bear directly on policy decisions to be considered by governments and other agencies.

It is our included duty to prompt the widest circulation of materials representing the best knowledge supplied by leading workers in the field of optical biophysics generally, and “non-linear,” especially “non-thermal” effects of electromagnetic radiation by and upon mitotic and subsumed processes. This must include background materials, such as the roots of biophysics in the relevant deliberations of Parmenides, Plato, and Archimedes; and the emergence of modern optical biophysics from the pioneering work of Nicolaus of Cusa, Pacioli, Leonardo da Vinci, Dürer, Kepler, Fermat, Pascal, Leibniz, et al., through Pasteur, Vernadsky, Gurwitsch, et al. This must include the best selections of work of researchers over the recent forty years, among whom Webb has special importance for anyone attempting to master the field today.

Dr. Webb’s manuscript in view has a special place in that reporting. It summarizes much valuable experimental inquiry from the standpoint in physics which he adopts for this manuscript. Although I disagree with the elementary features of the physics employed for this purpose, for reason akin to my earlier criticisms of [Nicolas] Rashevsky’s method, Dr. Webb has thus situated the material itself in the integrated way most advantageous for deliberation upon the choice of physics. Although I would disagree with some of the formulations, for reasons to become clear, his formulations are not to be discarded on that account, but rather restated by the simple expedient of translating them into the proper physics language. Hence, those formulations have an historic scientific importance in the form he supplies.

In other words, Dr. Webb has arranged his evidence in the quasi-Newtonian form suitable for describing primary experimental events in terms of the discrete manifold as such. If the manuscript is read in that way, it has durable value. The challenge is to restate the same points in a different physics-language, seeing the discrete manifold as a projection of what is ontologically elementary only in the Gauss-Riemann complex domain.

I think that the membership, reading now what I have to contribute on this matter, will soon recognize much we have already covered in many frames of reference over the past twenty years of study of economic science, and other applications of Riemannian physics. From this vantage-point, it should become obvious, rather quickly, where our specific, delimited competence lies in this matter and the policy questions of application involved.

1. The Meaning of ‘Strong Hypothesis’

1.10 Deductive Schemas

All deductively consistent systems of hypotheses and theorems in a formal logic are merely giant tautologies, subsumed everywhere, within each particular system, by what Bertrand Russell, et al. referenced as an “hereditary principle.” Each system as a whole is thus describable as forming what Professor Garrett Birkhoff et al. have described as a “lattice.” All of these features of any such deductive system of hypotheses and theorems are aptly illustrated by the deductive system of the Ptolemaic “false Euclid,” Euclid’s Elements.

The system begins with an array of axioms and postulates; if we consider all possible deductive systems, then within any one such system the distinction between “axioms” and “postulates” has, by itself, no functional significance. The only “axioms” within any choice of deductive system, are those postulates which are implicitly common to all possible deductive systems. Hence, in practice, I use the term “axiom” to signify those postulational assumptions common to all deductive systems susceptible of logical consistency; I use “postulate” to signify arbitrary assumptions whose inclusion sets one or more such “lattice systems” apart within the domain of all possible forms of consistent deductive schemas.

EIRNS/Stuart Lewis
Lyndon H. LaRouche, Jr., in 1985.

During the past 2,000 years, very little has been added to our knowledge of the “properties” of deductive systems which was not already known to Aristotle and those among Aristotle’s epigones whose combined efforts constitute the Ptolemaic Euclid’s Elements. More precisely said, there is nothing new known about the properties of such systems which can not be adduced through criticism of Aristotle’s dialectic from the standpoint of Plato’s Socratic dialectic.

To build a deductive lattice, begin with the array of postulates. Make various combinations of the original postulates, to assert something deductively implicit in that selection, but not in contradiction to any of the postulates not immediately considered. Repeat this, until all possible combinations of the original array of postulates have been treated in this manner. This supplies an initial layer of hypotheses (or, theorems).

Next repeat this, treating the initial array of hypotheses as building-blocks for members of a new layer of hypotheses, each of which is without contradiction to any among the original array of postulates. Exhaust all possible combinations, so. This is the second layer of hypotheses.

Repeat this indefinitely, adding successively new layers of hypotheses. So, the lattice is constructed deductively.

Thus, the most obvious “property” of each and every deductive system, or “lattice,” is that no hypothesis exists in the system which is not implicit in the statement of the original array of postulates. This “property” is the hereditary principle.
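A minimal sketch of this layered construction, in Python, may help fix the idea; the sketch is ours, modeling each hypothesis simply as the set of the original postulates it combines, and the closing assertion expresses the hereditary principle:

    from itertools import combinations

    def build_lattice(postulates, depth):
        # Each hypothesis is modeled as the set of original
        # postulates it rests upon; each new layer is formed by
        # combining members of the previous layer.
        layers = [[frozenset([p]) for p in postulates]]
        for _ in range(depth):
            previous = layers[-1]
            new_layer = {a | b for a, b in combinations(previous, 2)}
            layers.append(sorted(new_layer, key=sorted))
        return layers

    postulates = ["P1", "P2", "P3", "P4"]
    layers = build_lattice(postulates, depth=3)

    # The hereditary principle: no hypothesis exists in any layer
    # which is not implicit in the original array of postulates.
    assert all(h <= frozenset(postulates)
               for layer in layers for h in layer)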

1.11 Common Axioms of Deductive Schemas

It is often assumed falsely, that matters of logic can be separated from the subject-matters to which a system of formal logic might be applied. A commonly encountered expression of this mistaken belief is the assumption that there exists a body of pure mathematics, which can be distinguished from any one kind of mathematical physics, at least to the degree that experimental physics could not refute a formal principle of pure mathematics.

Any formal system of rational thought, sometimes identified as a specific choice of method, is readily shown to be permeated, hereditarily, by elementary ontological assumptions, to the effect that any choice of method is also a choice of physics. To restate this crucial point of our entire argument here: any choice of method, insofar as it is a distinct choice of method, is also a distinct kind of assumption respecting the nature of “matter,” a different notion of “matter” than that embedded in the employment of a different choice of method.

The axiomatic assumption hereditarily common to all deductive method, is the assumption of discreteness. This assumption is commonly expressed in the form of statements to the effect that the existence of time and space is linear, with no possible quality of discreteness associated with space as such or time as such. “Matter,” in contrast to such notions of space and time, has the essential, assumed characteristic of discreteness.

In other words, in the definition of a “point,” in each and every deductive system, the “point” in space or time has the attributed quality (property) of being infinitely divisible, without limit; whereas substance, or matter, can not be subdivided without limit. Matter can exist, according to such species of axiomatic assumptions, only to the degree that there is a limit to our assumed ability to subdivide it into smaller portions. Matter can be reduced, it is assumed, only to some definite, smallest degree, which latter is assumed to be the elementary state of existence of matter.

In all deductive systems, all of the possible properties of matter, or substance, are derived deductively from the bare, axiomatic assumption of the self-evident equivalence of matter to discreteness. If the proponents of the method do not themselves argue for the existence of such a connection, it can be shown, nonetheless, that those proponents have unwittingly adopted such an assumption as a hereditary feature of all applications of that method.

Thus, in all deductive method, percussive action and action at a distance are the only forms in which events can occur within abstract, linear space, and abstract, linear time. These two properties of discreteness are expressed as a single property, in the deductive method’s notion of force.

For this reason, all deductive method is intrinsically linear, and false to reality on that account.

1.12 Deduction’s Limits

This interdependence between axiomatic notions of discreteness and linearity shows most clearly in the easily demonstrated reasons that no deductive method can employ the terms creation (the verb, to create) or life (the verb, to live), except as empty, unintelligible notions. In the proper alternative to deductive method, constructive-geometric method, we can supply an intelligible representation of both terms, and can show that the two terms are properly different ways of saying the same thing.

In deduction, creation signifies that something exists at moment B, the which did not exist at an immediately preceding moment, A. “Creation” thus signifies the occurrence of such a moment B. No representation of a process of creation, bridging the two moments, is possible; the term, “creation” is used in all deductive method to signify that which no logician knows, for which he can supply no intelligible representation. Thus, in the mouth of the logician, the verb to create is a meaningless one.

In the same way, and for the same reason, life is an empty notion in the mouth of the logician. In other words, life as a concept does not exist within the scope of molecular biology. On this point, the relevance of these issues of method to optical biophysics begins to be made clearer.

Dr. Sydney Webb is among those biophysicists who have implicitly recognized and emphasized this fact as a biological, experimental fact. The practical problem underscored by the importance of his work, as well as that of other researchers working in the same vein, is the need to define a method of mathematical-physics representation appropriate to the non-linear (i.e., non-deductive) character of the processes examined.

1.20 ‘Strong Hypothesis’ in Deductive Method

By “strong hypothesis,” we should understand one another to signify emphasis upon the “hereditary properties” of deductive lattices, rather than arguments situated within some locality of a specific choice of such lattice. In other words, each theorem or hypothesis is addressed directly, immediately, in terms of the most fundamental characteristics of the schema as a whole, rather than in the customary manner associated with the use of that schema. Within deductive method, an hypothesis which addresses another hypothesis directly in terms of the characteristic properties of a specific lattice would already be a “strong hypothesis,” relatively speaking.

For our purposes here, in contrasting the application of any sort of deductive method to a constructive method, it is the axiomatic features of any and all deductive methods, upon which our interest is focussed directly. This represents the “strongest” kind of hypotheses which could be introduced to the examination of any issue of deductive method.

Rather than tracing our arguments through each node in the lattice, back to the underlying axioms and postulates, we take advantage of the “hereditary principle” directly, to focus only upon those limitations which are implicit in each and every hypothesis within a lattice as a whole, because of the implications of the set of axioms and postulates on which the generation of the lattice as a whole is premised. It is those axiomatic features of each theorem which draw our attention directly.

In this case, it is the axiomatics common to all deductive method which draw our attention in that way. I.e., how does Dr. Webb’s use of “classical physics” incur the implications of axiomatic assumptions of discreteness to such effect that a living process can not be directly represented in this way?

2. Constructive Geometry

In the manuscript, Dr. Webb’s approach to approximating the self-replicating features of living processes borrows, at least in effect, from 1930s and later discussions of “Turing machines.” At some points, he employs arguments identical to those shown by topologists to have been central to the “Turing machine” theses.

As we know, such schemas apply to non-living processes; 1950s work on clever topologists’ toys, such as “shake boxes,” illustrates the point. So, it should be clear, from the outset, that the methods of Alan Turing, and similar approaches, are not appropriate for treating the characteristics of living processes.

As should be rather well known, this is familiar terrain for me, from my 1940s-1950s work in refuting “information theory.” Norbert Wiener and his collaborators, for example, worked through the “Turing machine” paradigms, as models implicitly susceptible to Ludwig Boltzmann’s statistical model of entropy/negentropy measurements. For related reasons, the Turing model would appear to provide an intelligible representation within the range of the “classical physics” which Webb references. Nonetheless, for axiomatic reasons referenced already by Johannes Kepler’s treatment of the snowflake, a Turing model lacks all of the essential characteristics of a living process.

CC/Jitze Couperus
“All living processes are characterized by an harmonic ordering of growth, congruent with the Golden Section.” Shown: Cross-section of a nautilus shell.

2.10 The Limits of Euclidean Space

The fallacies of deductive method are made rigorously clear, most emphatically, by the classic treatments of two central problems of geometry: the impossibility of the quadrature of the circle, and the uniqueness of the Platonic solids. The Golden Section (Platonic solids) represents the boundedness of intelligible representation of construction within visible (e.g., “Euclidean”) space. As Luca Pacioli demonstrates, an effective treatment of this uniqueness of the Platonic solids is possible only from the standpoint of Nicolaus of Cusa’s representation of the isoperimetric properties of physical space-time: a solution developed by Cusa with reference to Archimedes’ treatment of the attempted quadrature of the circle.

Although it is now clear enough, that the geometry known to Plato et al. was a constructive, or synthetic geometry, rather than a deductive system, it is meaningful to state, that modern constructive geometry begins with Cusa’s De Docta Ignorantia. Cusa’s “Maximum-Minimum” principle, in that location, is not merely an isoperimetric theorem; it is the first modern statement of a universal principle of least action in physical space-time: the least perimetric displacement subtending the relatively largest area or volume generated by that action. It is also, more generally, a solution to the classical Parmenides problem, of rendering intelligible the efficient interdependency of microcosm and macrocosm.

Starting from this notion of least action, all intelligible forms of constructible existence in visible (discrete manifold) space are generated without additional axioms or postulates, and by methods excluding any employment of deductive methods. All notions of axiomatic discreteness of “matter” are excluded; this elimination of axiomatic discreteness forces us, as Kepler exemplifies this for the foundations of comprehensive modern forms of mathematical physics, to eliminate the relatively distinct notions of matter, space, and time, and to introduce physical space-time instead.

It is to be emphasized that Cusa’s 1440 De Docta Ignorantia already establishes a true “non-Euclidean geometry,” one entirely distinct in notions of method, as well as axioms and postulates, from the deductive system of Euclid’s Elements. This non-Euclidean (constructive) geometric method, premised upon no assumption but the principle of least action, is the underlying distinction in method within the more fundamental qualities of work of Pacioli, Leonardo, Kepler, Desargues, Fermat, Pascal, Leibniz, Gauss, Riemann, et al.

Multiply-connected circular action.

In constructive geometry, as in the elementary form of synthetic geometry elaborated by Professor Jacob Steiner et al., the existence of “points” and “straight lines” is constructed, thus eliminating all assumptions of linearity and axiomatic discreteness embedded in all deductive method. Multiply-connected circular action suffices to generate both of these linear forms from nothing but continuous circular action; both points and straight lines appear as singularities, discontinuities, or boundary conditions generated by continuous least action.
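One elementary version of that construction, offered here as a gloss: fold a circular disk upon itself, and the crease so generated is a straight line (a diameter); fold it again, and the intersection of the two creases is a point (the center). Line and point thus arise as boundary features of the circular action itself, not as self-evidently existing elements.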

So, Pacioli prefigured the work of Leonhard Euler et al. in treatment of Leibniz’s analysis situs, and in a more refined examination of the matter of the platonic solids. The Golden Section, as the boundary condition defining the limits of intelligible representation of construction within visible space, expresses the self-boundedness of visible space.

This work of Pacioli et al., as elaborated by Kepler, defined, by the onset of the seventeenth century, two facts about our universe as a whole. First, that all living processes are characterized by an harmonic ordering of growth which is congruent with the Golden Section. Second, Kepler’s proof, that the most general laws of ordering of the universe are also governed by the same harmonic ordering otherwise peculiar to the growth and activities of healthy living organisms.

It is also the case, that on the atomic and sub-atomic scale, events are organized harmonically according to the same principles manifest in Kepler’s system.

Thus, at the two extremes of scale, and in the instance of living processes, the picture of the laws of the universe manifest to us in terms of the discrete (visible) manifold, is that of harmonic orderings congruent with the Golden Section. Between the two extremes of scale, any process which is so characterized is either a living process, or a special class of work by a living process. All processes not so characterized are non-living, in the sense that Kepler identifies the distinction in his paper on the snowflake.

Thus, a strong hypothesis for the mathematics of living processes, must locate the harmonic ordering characteristic of living processes within the atomic scale of physical phase-space. It appears, at first inspection of the evidence, that the ordering of living processes is “teleologically” ordered, such that whatever healthy living processes do, the result is congruent harmonically with the Golden Section. Therefore, it is the first rule for elementary statements respecting living processes, that we must situate those statements within the geometric ordering congruent with the Golden Section, an ordering whose root is the Golden Section harmonics embedded within the phase-space of processes on the atomic scale.

Conic self-similar spiral
Simple spiral action in the complex domain (left) is cylindrical in form; at one-half rotation, the distance moved along the vertical z-axis is one-half the distance moved along the z-axis by a full rotation. The radius at one-half rotation is the arithmetic mean (α+β)/2. In conical spiral action, the radius at one-half rotation is the geometric mean, √(αβ).
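The caption’s two mean values can be checked directly; the notation here is ours. For a self-similar (logarithmic) conical spiral whose radius grows from α to β over one full rotation, r(θ) = α(β/α)^(θ/2π), so that r(π) = α·√(β/α) = √(αβ), the geometric mean. An advance which is instead uniform per unit of rotation, as along the cylindrical z-axis, yields at one-half rotation the arithmetic mean, (α+β)/2.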

2.11 Beyond the Visible Domain

Harmonic orderings congruent with the Golden Section are the limit of intelligible constructability within a visible space defined in terms of multiply-connected, circular forms of physical least action. They thus represent the inherently self-bounded quality of the visible manifold. Yet, we can construct forms which go beyond those limits, provided that we shift the location of construction to the Gauss-Riemann complex domain; this latter is simply the domain defined through the replacing of circular least action by self-similar-(conical)-spiral least action.

From the higher vantage-point so defined, the visible domain is the projection (upon, for example, the brain’s visual cortex) of processes in the higher-order space, the complex domain. Since the higher domain is characterized by conic self-similar-spiral action as the form of multiply-connected least action, the characteristic feature of the projection is the Golden Section, which appears within the lower domain, the discrete manifold, as the characteristic form of self-bounding of the lower domain. (Conformal projections in Riemannian space make this connection transparent.)

The Gauss-Riemann complex domain is not the only form of the complex domain conceivable. The Fourier domain is also a complex domain, defined in terms of multiply-connected, self-similar spiral action: helical, or “cylindric” action. Yet, Fourier Analysis can not render intelligible certain classes of functions which actually exist: continuous functions which subsume discontinuities (singularities). The multiply-connected, self-similar-spiral form of least action renders such continuous functions intelligibly constructible. Implicitly, as Riemann addressed this potentiality, any seemingly arbitrary function is susceptible of intelligible (constructive and trigonometric) representation in the Gauss-Riemann complex domain.

The bare form of the Riemann Surface function illustrates the point.

Prudently, the constructive synthesis of the Gauss-Riemann complex domain should begin, pedagogically, with an intensive examination of Gauss’s treatment of the arithmetic-geometric mean. This is simple self-similar-spiral action, examined solely in terms of strictly determined elliptic cross-sections of a single or double rotation of the spiral generating the cone.
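The iteration itself is brief. The following Python sketch of the arithmetic-geometric mean is ours (its names are illustrative); the numerical aside records Gauss’s own 1799 observation:

    from math import sqrt

    def agm(a, b, tol=1e-15):
        # Replace the pair (a, b) by its arithmetic and geometric
        # means, repeatedly; the two converge, quadratically, to a
        # common limit, the arithmetic-geometric mean M(a, b).
        while abs(a - b) > tol:
            a, b = (a + b) / 2.0, sqrt(a * b)
        return a

    # Gauss observed (1799) that M(1, sqrt(2)) equals the ratio of
    # pi to the lemniscate constant, the bridge from this iteration
    # to the elliptic integrals underlying the elliptic
    # cross-sections discussed above.
    print(agm(1.0, sqrt(2.0)))    # approx. 1.1981402347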

We examine these constructions two-foldly, as constructions within the cone generated, and as projections of those constructions upon the plane. The conic generation and its characteristics represent the mental image of the most elementary aspect of the complex domain, and the plane projections are prototypical of the corresponding images in the visual domain (discrete manifold).

We translate these constructions into their descriptions, the trigonometric functions which describe the generation of the cone, and also of each construction within that generation. We view this as a more advanced, more adequate representation of the corresponding arguments of Kepler. That is to stress the point: we re-examine all of the conceptions of Kepler, especially the most crucial ones, from this starting-point in Gauss-Riemann physics.

We observe, that the plane projection of the elliptic cross-sections corresponding to the harmonically ordered divisions of one cycle of the cone’s generation, define the focus of the ellipse coinciding with the cone’s axis as the Keplerian “Sun” of the elliptic functions. We note the significance of the perihelial/aphelial ratios of perimetric action from Kepler in these terms of reference.

Most notably, we show that the Keplerian orbits, so situated, are least-action pathways. In conventional physics-language today, these are force-free pathways. The relevant work of Drs. Winston Bostick, James D. Wells, Robert Moon, et al. comes directly into play as a standpoint of reference for our discussion of this. We include emphasis upon Dr. Moon’s work on the geometric determination of the periodic table and its properties, and of the fine-structure constant, and correlate this treatment of the microphysical form of the fine-structure constant with Dr. Benedetto Soldano’s related work on differences between gravitational and inertial mass for the astrophysical scale. We emphasize the electromagnetic standpoint of reference, adopting the starting-point of the progress of Gauss, Weber, Riemann, and Beltrami in electrodynamics.

We emphasize such notions as Riemannian induced transparency of the physical space-time (phase-space) medium for propagation of electromagnetic action. We emphasize, in this connection, the notions of retarded potential for both propagation of induced transparency and propagation of the wave or wave-pulse itself. We are concerned to define synthetic geometric constructions for each of the physical propositions, and to render these fully intelligible by aid of methods of strong hypothesis.

In this mode, we pass to the more general case for synthesis of the Gauss-Riemann complex domain. Our next construction, is the construction of doubly-connected self-similar-spiral action. This case introduces the generation of true singularities (as distinct from the singularities of elementary, circular-action synthetic geometry of the visible domain). This gives new physical meaning to the importance of hyperbolic trigonometries, in addition to the circular, elliptic, and parabolic trigonometries subsumed by simple self-similar-spiral action. This also introduces the simplest form of the notion of a Riemann Surface function’s conformal mapping.

This simplest expression of the Riemann Surface function’s conformal projection shows already how and why a properly defined continuous function may generate discontinuities (null points in topological continuity) and yet remain continuous as a function. Hence, from this standpoint, the case for a doubly-connected self-similar-spiral action makes necessary, according to the Dirichlet Principle employed by Riemann, the triply-connected self-similar-spiral action’s domain, and the hyperspherical trigonometries so generated. It is useful to think of a Riemann Surface as a Gauss-Dirichlet-Weierstrass-Riemann Surface, as Dirichlet emphasized the situating of the case by Gauss’s work, and as Riemann situated his own work with respect to the topological principle of Dirichlet and the principle of the famous Weierstrass function.

This is more warmly appreciated as a fully intelligible principle from the vantage-point of 1871-1883 work of Georg Cantor. The most important specific proposition from the work of Cantor, is the notion that the number of discontinuities within an arbitrarily small interval of a continuous trigonometric function (in the complex domain) is implicitly enumerable. The derived function, of enumerability of a rate of increase of such density of discontinuities, is the form of expression of the strong-hypothetical characteristics of the Gauss-Riemann domain which bears most directly and pervasively upon proper choice of mathematical physics for living processes.

Looking backwards from Cantor’s indicated work, to the work of Riemann, situating Cantor’s notions of transfinite orderings as specific to the Gauss-Riemann domain, illuminates the latter, and enables us to continue in the proper further directions beyond the accomplishments of the former.

Most specifically, we locate ontological actuality as existing efficiently within the complex domain so defined. Only those functions which correspond to assured continuity of cause-effect in the Gauss-Riemann complex domain represent for us ontological elementarity of existence. Hence, the universe is ontologically transfinite.

That means, for example, that ontology is efficiently located by no less adequate means than functions for transfinite orderings corresponding to an ordering of changes in the rate of increase of the density of discontinuities (singularities) per interval of multiply-connected self-similar-spiral action (i.e., negentropy). This is the general form of the function required for intelligible representation of living processes (as, for intelligible representation of physical-economic processes).

This is our meaning when we say: It is continuous functions which subsume, potentially, increasing density of discontinuities (singularities) for any chosen interval of action, which meet the minimum requirement for representation of living processes. Such functions, comprehended as statements in Gauss-Riemann synthetic geometry, are the intelligible form of negentropy—in opposition to the unintelligible statistical-thermodynamics definition.
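A schematic transcription of this minimum requirement, in symbols of our choosing rather than the author’s: let n(s) count the singularities accumulated through an interval s of the multiply-connected action. The density spoken of is then ρ(s) = dn/ds; the requirement that this density increase for any chosen interval reads dρ/ds > 0; and the transfinite orderings invoked above compare processes by the successive, higher-order rates of change of ρ. Negentropy, in this usage, is measured by that ordered increase, not by any statistical-thermodynamic quantity.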

2.20 The ‘Force-Free’ Requirement

Kepler already shows, that, to adduce the general laws of physics, we must eliminate all consideration of notions of forces acting among discrete bodies. We must adduce the laws of the universe from nothing but the geometry of physical space-time as a true continuum.

The fine-structure constant, for example, illustrates the significance of this. So does the definition of the speed of light, if that definition is made intelligible in terms of the Gauss-Riemann domain; the correct reformulation of Max Planck’s argument for the necessity of the quantum constants is a by-product of this determination of the speed of light.
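For reference, in conventional notation, the constant in question is the pure number α = e²/(4πε₀ħc) ≈ 1/137.036; it is precisely its dimensionlessness upon which the argument below turns.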

For example: Assume any value for the rate of simple, cylindric-helical (self-similar) propagation of radiation, with the mere requirement that this be a constant value, whatever that value might be assumed to be. This is the value for force-free (least-action) radiation, not subject to retardation of the potential rate of propagation by any medium. A medium is distinguished, in physical geometry, as a density of singularities per interval of action.

Such radiation in the complex domain has zero values each cycle, defining a quantum of force-free action (least action).
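A schematic reading of this, in conventional notation that is ours rather than the author’s: write the helical radiation as ψ(z, t) = A·e^(i(kz − ωt)), with the constant rate of propagation c = ω/k. The visible (real) projection, cos(kz − ωt), passes through zero twice in each cycle; these zero values partition the continuous propagation into discrete, countable cycles, and it is that cycle of least action, rather than any assumed discreteness of matter, which is here taken to define the quantum.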

This has richer meaning in the self-similar-spiral domain, and still richer in the multiply-connected such domain. Implicitly, all of the characteristic dimensional constants of physical phase-space are derived from this physical geometry as a physical geometry of continuous physical space-time. All of the fundamental laws of physics (and biophysics) must be properly stated in terms of dimensionless constants so given intelligible representation.

The more adequate statements are those obtained by applying the Gauss-Riemann domain retrospectively to the work of Kepler, to derive a Keplerian physics more adequate than that developed by Kepler himself. In other words, every crucial proposition in Kepler must be reconstructed in terms of the Gauss-Riemann domain.

Kepler employs the preceding work of, chiefly, Cusa, Pacioli, and Leonardo, to unify the geometry of living processes with that of astrophysics. We know that the Gauss-Riemann recasting of the Keplerian geometry of astrophysics is also the geometry of microphysics. Thus, all strong hypotheses in physics must situate all general statements, those corresponding, in power of argument, to general physical laws, within this common geometry. We must treat physical space-time as triply-self-bounded experimentally, by the extremes of scale, of microphysics and astrophysics, and by the characteristics of living processes as living processes. A strong hypothesis is thus one intrinsically true with respect to all three bounding conditions taken as one general condition.

Reference should be made to Riemann’s posthumously published criticisms of the work of the anti-Kantian Herbart, with emphasis on the antinomies included in those papers. The standpoint of the initial, seminal papers which Riemann produced through 1854 under the direction of Gauss, is efficiently located in these posthumously published commentaries on Herbart’s work.

Whatever we say of the fundamental principles of astrophysics must be shown to be true for microphysics and living processes as well, and similarly for all combinations of the three.

The characteristic of all physical space-time geometry, is that it is internally self-bounded by harmonic orderings which, in the discrete manifold, are congruent with the Golden Section. Why this must be so, is made intelligible by the characteristic of the Gauss-Riemann domain: multiply-connected self-similar-spiral action. The pathways of action corresponding to these harmonically ordered values are least-action pathways, and thus the relatively most-force-free pathways of action.

This prescribes a definition of fundamental laws in terms of a generalized notion of dimensionless constants, including the intelligible representation of the construction of the fine-structure constant. The Gauss-Riemann correction of Keplerian harmonic orderings is the generalized notion of all such dimensionless constants. They are dimensionless, because they defy the deductive assumption of ontological discreteness peculiar to all parodies of a Euclid-Descartes manifold, and are simply the physical geometry of a physical space-time continuum, in which singularities are generated without tolerating notions of self-evident existence of discreteness.

NASA, ESA, CXC, JPL-Caltech, J. Hester and A. Loll (Arizona State Univ.), R. Gehrz (Univ. Minn.), and STScI
“Whatever we say of the fundamental principles of astrophysics must be shown to be true for microphysics and living processes as well, and similarly for all combinations of the three.” Shown: The Crab Nebula, from the Hubble Space Telescope.

So, rather than attempting to account for the existence of apparent or actual force-free states from the standpoint of “classical physics,” we treat force-free states as the ground-states of matter, in which the laws of the universe are most proximately manifest, and derive the existence of conditions appearing to exhibit force from the force-free states of matter. We accomplish this in the only way this can be managed, by treating the physical geometry of the Gauss-Riemann domain not merely as a method, but as a direct representation of the physical composition of cause-effect in the universe.

In reviewing Dr. Webb’s manuscript, we observe that that which he attempts to situate, as biophysical evidence, within his representation, begs precisely this approach. Our proposed approach would supply the best representation of his argument. The implied task, is to work through each phase of his argument from this fresh standpoint. Thus, we lose nothing of his contributions as a biophysicist, while placing his essential, biophysical observations on the more appropriate basis. It is the peculiar value of his attempt to construct a case in terms of “classical physics,” that the thoroughness of his endeavor states the case in the digested terms most suited to our own additional treatment of the experimental evidence he correlates.

2.21 ‘Non-Linearity’

The formal mathematical definition of “non-linearity” is an empirically continuous process which is more or less densely populated with actual or potential singularities (discontinuities), to such effect that no linear statement of the function could bridge these discontinuities.

From the standpoint of strong hypothesis, we would find such a definition acceptable up to a point, but otherwise inadequate. The more adequate definition can be approached on two successive levels.

First, with respect to deductive systems as a whole, a “non-linearity” has the form of a modification, “midstream,” of at least some among the underlying postulates of the system.

This is analogous to the action accomplished by a Socratic dialogue (as Plato’s “Socrates” says: “my dialectical method”). The critical examination of a proposition, through successive peeling away of underlying implicit assumptions, leads to some modification of an underlying, implicitly required postulate of that proposition, and to a new proposition, replacing that criticized, premised upon a correction of the faulty postulate. This is the method of strong hypothesis, another term for Plato’s “dialectical method,” as distinct from that of Aristotle, Kant, Hegel, et al.

Our use of strong hypothesis, refers to a higher form of the ordinary aspect of that dialectical method, which Plato represents as the hypothesis of the higher hypothesis. The domain of action of the latter is strong hypothesis applied to higher-order transfinite orderings, such as the elementary ontological ordering-principle—changing rate of increase of density of enumerable discontinuities, as the metric of negentropy—we have identified here.

In Riemann’s 1854 “On the Hypotheses Which Underlie Geometry,” this is given the initial, approximate representation, in terms of alterations of degrees of freedom of a function, to the effect of changing the characteristic metric of action in physical space-time (phase-space). It is the generalization of the point of that dissertation from the vantage-point of the Riemann Surface, and its indicated representation by a neo-Cantorian transfinite ordering, as we have indicated this, which best defines the meaning of non-linear for most usages in mathematical physics.

This brings us to the second, more adequate representation of “non-linearity” of continuous functions, from a standpoint consistent with our strong hypothesis.

The adequate representation depends upon elimination of the axiomatic, interdependent notions of discreteness and linearity intrinsic to all deductive lattices. We have already indicated that linearity is but the complement to the notion of axiomatic discreteness. We have already indicated also, that our ontology—that required for study of the characteristics of living processes defining them as living—prohibits all axiomatic notions of either discreteness or linearity, by the introduction of the notion of physical space-time, to replace entirely the Euclid-Descartes notions of elemental distinctions among matter, space, and time.

In physics today, we are cruelly burdened by the popular assumption, that “physically elementary” is signified by that which is primitively countable arithmetically, and the presumed elementarity of linearity. Hence, the notions of physical laws are stated in terms of scalar (discrete) magnitudes, together with linear notions of space and time. This is a cruel burden, since all truly elementary statements are non-linear propositions in the Gauss-Riemann complex domain.

It is this mistaken approach to representation of fundamental and other physical laws, the which prevents such a mathematical physics (or, biophysics) from rendering intelligible such elementary notions as “creation” and “life.” It is this which causes the actuality of “creation” and “life” to fall between the cracks of statements in acceptable forms of deductive logic, and of a mathematical physics defined formally in terms of a deductive logic. The axiomatic assumption of discreteness and linearity is the vicious root of these formal difficulties; without eradicating these complementary, axiomatic assumptions of all deductive systems, a valid astrophysics, microphysics, and biophysics is impossible, in each and all cases.

The solution is most simply represented by the statement, that discreteness and linearity are brought into existence within the discrete manifold by that multiply-connected form of continuous least action which is axiomatically neither discrete nor linear. Hence, the mere existence of discreteness or linearity is a product of “creation” so defined: the generation of true singularities by an adequately defined notion of continuous function. On no less a basis than this correction, can either “creation” or “life” be rendered intelligible.

2.30 ‘Non-Thermal’

The experimentally false argument that electronic agents of biological warfare destroy targets through “thermal effects,” actually signifies two very large assumptions.

First, it assumes the scale of caloric measure of molecular-biological events, on the scale of either the cells as such or some large element of the cell. The phenomena relevant to use of non-linear electromagnetic effects for biological warfare, may be viewed as the electronic equivalent of poisoning of the targeted tissue by the most powerful biological agent imaginable. Even from the “thermal” standpoint, we are dealing with events on the scale of quanta/phonons.

Thus, the proponents of the “thermal-only” dogma, are making arguments which are most kindly rebutted as being in error by orders of magnitude.

Second, underlying the thermal argument more deeply, is the superimposition of the axiomatics of deductive lattices in the guise of such axiomatic assumptions widely adopted by molecular biology. The events which primarily distinguish living from dead tissue experimentally, involve non-linear phase-shifts in electromagnetic pulses on the scale of quanta.

The aspect of Webb’s manuscript bearing upon this matter is most crucial for our work: the treatment of protons and electrons, as well as photons, as “standing waves,” is key. This is the point of departure for our examination of the physics of Webb’s manuscript.

For example, in tuning to the brain’s alpha waves, at circa 8 Hertz, our concern must be the modulation of those waves by non-linear pulses (“solitons,” “chirps”). This presents us with a challenge in the design of instrumentation and methods for the study of brain waves generally, and, obviously, of other tissues.
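As a purely illustrative sketch of the measurement problem (the 8 Hertz figure is from the text; the sampling rate, sweep band, and amplitudes below are arbitrary assumptions of ours), the following Python fragment synthesizes such a record: an 8 Hz carrier bearing a brief linear chirp, the kind of transient pulse the instrumentation must resolve against the stationary wave.

    import numpy as np

    fs = 1000.0                          # sampling rate, Hz (assumed)
    t = np.arange(0.0, 4.0, 1.0 / fs)    # four seconds of record

    carrier = np.sin(2 * np.pi * 8.0 * t)    # ~8 Hz alpha-band carrier

    # A brief linear chirp sweeping 20 -> 60 Hz between t = 1.5 s
    # and t = 2.0 s, riding on the carrier at one-fifth amplitude.
    t0, dur, f0, f1 = 1.5, 0.5, 20.0, 60.0
    inside = (t >= t0) & (t <= t0 + dur)
    phase = 2 * np.pi * (f0 * (t - t0)
                         + 0.5 * (f1 - f0) / dur * (t - t0) ** 2)
    chirp = np.where(inside, 0.2 * np.sin(phase), 0.0)

    signal = carrier + chirp
    # A time-frequency display (e.g., scipy.signal.spectrogram) shows
    # the transient sweep standing out against the steady 8 Hz line.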

We may say, for purposes of broad description, that “life,” as distinct from presently accepted notions of molecular biology, is characteristically electromagnetic in these indicated terms of non-linear reference. Hence, crucial experiments in this domain must show, that we can destroy or strengthen life, with non-linear electromagnetic pulses, without actions defined in terms of presently accepted notions of molecular biology. Hence, the error in the “thermal-only” dogma, is not merely that it is orders of magnitude off scale in thermodynamic terms; it ignores the point that molecular biology is the medium of biophysics as such, rather than life being an epiphenomenon of molecular biology as presently defined. I use “medium” in the sense of “medium” of induced electromagnetic transparency and of retarded potential for propagation of electromagnetic pulses.

The phenomena to be measured are situated within a physical phase-space within the atomic scale. Larger molecular structures are both “wave guides,” and function also as very complex “lasing devices” within which the essential actions occur on the scale of atomic phase-space. The source of the negentropy which is generated in this sub-feature of the molecular biological medium, is the “Keplerian” negentropy already inherent in sub-atomic phase-space, as we have indicated the more adequate Gauss-Riemann reconstruction of the Keplerian universe.

Thus, sub-atomic phase-space must be mapped in terms of Gauss-Riemann least action (e.g., “dimensional constants”), and thus given intelligible representation on an ostensible “force-free” elementary basis, with no explicit or implicit assumptions of discreteness or linearity to be tolerated.

Once we introduce axiomatic assumptions of discreteness and linearity, we exclude axiomatically from experimental inquiry the class of phenomena which is most crucial. Webb’s manuscript, like related work in non-linear electromagnetic characteristics of living processes, demands this approach as the only hope for a true solution to the propositions emerging from experimental work.

3. Policy Implications

For the reasons so summarized, our urgent work of promoting crash programs of research and development in both electronic agents of biological warfare, and AIDS research, will encounter a dogmatic force of resistance much greater than encountered in our promotion of the SDI since February 1982. The resistance to be encountered will be both the politics internal to science, as we have implicitly stressed here, and also Soviet and Soviet-fostered political and strategic resistance.

Politically, it is of the utmost urgency to Moscow strategically, that the West not effect leaps in scientific fundamentals. This pertains not only to military applications of discoveries. It pertains also, equally emphatically, to Moscow’s opposition to any economic recovery in the West, and to Moscow’s interest in opposing anything which might foster a renewal of scientific, and hence cultural and political optimism within western civilization.

Otherwise, we must recognize that this experimental work challenges most directly the fundamental axiomatic assumptions prevailing in taught science today. Even an aversive glance in the direction of an axiom which a scientist has learned to treasure all his life, an axiom he considers integral to his status as a scientific professional, has usually evoked red-eyed fanaticism by professionals against those who seem to regard such an axiom as merely unnecessary. The angered reaction will be Kantian, as Heinrich Heine’s Religion and Philosophy in Germany points to the homicidal brutishness simmering in the tortured soul of every Kantian.

Notwithstanding the political objections to scientific progress so identified, this progress must be forced through rapidly. The combined urgency of mastering the AIDS pandemic and Soviet work on electromagnetic strategic-assault weaponry, identifies this scientific progress as indispensable for the very continued existence of our civilization.

We have thus come, in this quarter as well as others, to the point in recent history at which the cultivated habit of toleration for preferences in opinion and “life-style,” must give way to the requirement that no opinion is any better than its scientific truthfulness. That which is not truthful in this sense, is wrong, and persons who cling to untruthful sentiment are culturally inferior, and less moral than those who cling to passion for nothing but truth. The continued existence of our civilization can no longer tolerate political and scientific practice based on the irrationalist and immoral dogma of “tolerance” for opinion per se. Liberalism must now die, so that mankind and civilization may live. There is no middle ground, no room for compromise, between the two.


[fn_1]. In a 1981 article, LaRouche described the ICLC (International Caucus of Labor Committees) “as an international academy movement, consciously modeled in intent and practice upon such precedents as Plato’s Academy at Athens, and tracing its heritage through Philo, Augustinian Christianity, the Arab Renaissance, and the 15th-century Golden Renaissance . . . in existence since 1973-1974, based chiefly in the U.S.A., Canada, Latin America, and Western Europe.” [back to text for fn_1]

