Sunday, April 6, 2025

We don't know what meaning is

Gödel, criticizing a paper by Turing, remarked on how 'concepts' are grasped by the mind in different ways, and that certain concepts can become clearer, sharper and richer as time goes on. 'Concept' can be taken to mean one's conception of something (which can involve more than the 'psychological' as understood by Frege and Husserl) or it can mean the thing (the ideal unity of all adequate conceptions of the concept) of which one has a (possibly imperfect) conception. Gödel's observation may not apply to all concepts, but only to some, for instance mathematical or metaphysical concepts. The clarification, sharpening and enriching of one's conception of a concept has to be carried out through logical and intuitive exercise (which will involve interaction with other concepts). This exercise will have a spiral structure, for one will return to the concept again and again, but now in a slightly different light.

We have no idea how concepts, meanings, intentions and references are related, or even what these things are. We have no idea how their mereology works or what the nature of these relations is.

Do we think of concepts or do we think through concepts ? And at a given moment can we be thinking of more than one concept, or be thinking via more than one concept ? When I think of the concept of prime number am I also thinking of the concept of number ?

What Frege got wrong was not recognizing that the purity and objectivity he postulated in thought, meaning and reference is only an approximate ideal, one which depends on the exercise and training of the mind. How can we understand (think of) this pure thought in its activity ?

In other words there is vagueness and there is clarity and objectivity - and there is a path and exercise leading from one to the other. But the ordinary conceptions and meanings of the ordinary mind - maybe these do not have any one definite, clear, objective counterpart. These are simulacra, pseudo-conceptions, shadows.

“Good Morning!” said Bilbo, and he meant it. The sun was shining, and the grass was very green. But Gandalf looked at him from under long bushy eyebrows that stuck out further than the brim of his shady hat.
“What do you mean?” he said. “Do you wish me a good morning, or mean that it is a good morning whether I want it or not; or that you feel good this morning; or that it is a morning to be good on?”
“All of them at once,” said Bilbo.

A note on formal philosophy

The following preliminary text needs to be corrected and the final considerations clarified and expanded in light of Platonic dialectics.

Is philosophy a subject, an activity of authentic value, capable of genuine progress, worthy to stand alongside mathematics, the sciences and engineering ? This has been much discussed. One view that very few have proposed is that perhaps only ancient philosophy (both of the west and the east) is of value – is authentic philosophy – and that authentic philosophy in the western post-classical era has remained a very elusive, hidden tradition. This iconoclastic position is not so difficult to reconcile with our exposition of phenomenological metaphilosophy. But here we take a radically distinct point of view – a view which does not, however, discard the psycho-therapeutic and ethical value of phenomenology.

So then what is ‘good’ or ‘authentic’ philosophy (we refer to this metaphilosophical ideal as logical formalism, LF) ? Here are its essential characteristics:

1. it keeps mental habits, ill-defined concepts and prejudices from insinuating themselves into philosophy, in particular in a cloaked or transposed form.

2. it is deeply concerned with the question: What is an argument ? (in particular: What is a valid argument ?)

3. it is deeply concerned with the question: How does language work ?

4. it holds up pure mathematics as the canon of knowledge, from which it follows that philosophical concepts, theories and arguments (proofs), in order to be valid, must be capable of being presented, expounded and checked in exactly the same way as mathematics.

Furthermore we can divide 4 into

4a. acknowledging 4 as the canon and goal of philosophy

4b. actually realizing this goal in partial or full detail

A corollary of 4 is:

Authentic philosophy is not possible without an adequate formalization of a sufficiently rich fragment of natural language.

We see also that ‘linguistics’ (in the post-Saussurean and contemporary sense) is a major part of philosophy.

Here we wish to present the antiquity thesis:

By and large we find a larger presence of LF in pre-modern philosophy than in modern philosophy (with several very important exceptions).

To show this we can study the 4 characteristics in the Peripatetic, Stoic, Platonic/Academic/Neoplatonic and Pyrrhonian schools. To show that this is the case for Aristotle is the ultimate motivation for our paper ‘Aristotle’s Second-Order Logic’.

But there were those heroes of early modernity who had this metaphilosophical ideal – philosophers, logicians, linguists/lexicographers (like Wilkins) – however lacking they were in the actual realization of this philosophy (if not falsely presenting their work as being more geometrico when it is not even close). It is unnecessary to go through the luminaries of the much maligned “rationalist” tradition of early modern Europe. We wish however to make the following points:

1. The alleged failure of characteristic 1: the religious influence in rationalism is in fact far less (and more Hellenic than specifically Christian) than in all the powerful concealed or transposed forms which it took in subsequent philosophy.

2. The brilliant insight and far-ranging influence of Descartes is not appreciated enough (and the same goes for the medieval philosopher Jean Buridan).

3. The rival “empiricist” tradition is also surprisingly aligned with the ideal and rigor of LF.

Paradoxically there is far more religious influence in the specifically 19th and 20th century evolutionary kinds of naturalism (as well as Heidegger) than in 17th century rationalism.

It is trendy to blame Descartes for introducing so-called “mechanism”, the “mathematization of nature” and much of what is bad in western civilization: we reply by challenging such critics to define what exactly they mean by “mechanism”, and refer the reader to the discussion on determinism, computability and differential models of nature. Descartes’ low point is his abhorrent view of animals (found also in Malebranche), which would seem to proceed not from logical argument but from inherited scholastic dogma. In fact Descartes’ (comparative) physiology might easily be interpreted as furnishing powerful arguments for animal rights (cf. the improved views of Leibniz), a cause which had already found a 17th century voice in Shakespeare.

We can question whether German idealism is not actually very far from this metaphilosophical ideal, and whether we do not also find a frequent conceptual and naturalistic transposition of Christianity into this philosophy in order to make it more palatable and apparently compatible with the perceived progress of science and social changes (as well as the tastes of Romantic art). The conceptual and argumentative aspects of its texts do not seem, at first glance, very close to the ideal of mathematics: sometimes this is explicitly acknowledged and even taken as a virtue (as in several passages in Hegel). We find alleged ‘deductions’ (in Kant and Fichte) which are difficult to see as proofs in the logical or mathematical sense.

Jules Vuillemin wrote a book about Kant’s intuitionism. While it is certainly reasonable to allow for a relation between primitive concepts (and axioms) and intuition, Kant’s use of intuition in the form of the synthetic a priori is very different. Schopenhauer has pointed out the inconsistent definitions of many key terms in the Critique of Pure Reason.

It would be impossible to discuss Bolzano, Cantor, Frege, Peirce, Peano and other important figures of the second half of the 19th century, as well as 20th century philosophy, without going into detail about points 2, 3 and 4, something which would go far beyond the scope of this short note. If Frege represents a revival of Leibniz’s characteristica project (another aspect was developed in Roget’s Thesaurus, an underrated work with strong philosophical roots) he also represents (according to Bobzien) a conscious re-emergence of some of the core elements of Stoic philosophy. We argue in “Aristotle’s Second-Order Logic” that Frege’s second-order logic is simply the logic and metalogic of Aristotle’s Organon (although we need an alternative way of presenting natural deduction closer to natural language reasoning).

We must make the important observation that so-called formal and symbolic logic became part of the education and interests of certain philosophical schools, but as a rule in a very deceptive and misleading way if we are looking for the kind of metaphilosophical ideal in question (Wittgenstein does not seem to have much in common with it). It is very important to study certain non-mainstream currents in French philosophy of the 19th century and the first half of the 20th century (among both the “spiritualist” and ontologist schools, but also among such noted thinkers as Brunschvicg, Rougier, Vuillemin, Cavaillès, etc.). Neokantianism however fails because of its defective logic inherited from Kant, its confused account of intuition and its typical Kantian dogmatic assumptions about the limits of reason.

After so-called ‘early analytic philosophy’ (Frege, the early Russell, Carnap, but also lesser known contributions by Hilbert, Mally, the Polish school of mereology, etc.) anything approximating LF was lost sight of in the analytic philosophy mainstream and has to be carefully looked for and investigated. The project of formalizing natural language has been carried out in ways less interested in logic and in the definition of philosophically relevant concepts. LF-relevant work is found outside official academic philosophy among linguists and researchers in artificial intelligence and knowledge representation (like John Sowa) – and most especially in mathematical general systems theory – a mathematical model theory encompassing consciousness, living systems, social organization and every kind of scientific and engineering domain.

It is worthwhile to examine in detail the re-emergence of metaphysics in analytic philosophy since the 1980s (especially the work of Timothy Williamson and Edward Zalta).

We must find a reconciliation between LF and universal phenomenology (UP). Notice how Descartes is a key figure for both and how both share the same high regard for ancient philosophy. Both esteem Hume. Both are opposed to inferentialism and meaning-as-use theories. If Husserl has an obvious connection to UP, the work of Claire Ortiz Hill has shown that LF-related concerns run deep in Husserl as well, with a close connection to Hilbert. A similar situation is found in Gödel (see J. Kennedy and Mark van Atten: Gödel’s Philosophical Development). Gödel was not only enthusiastic about the phenomenological method but also considered the quest for the primitive terms and their axioms to be a viable alternative. Even Kant never ceased to dream of a kind of Leibnizean project.

To effect this synthesis or reconciliation we can take inspiration from how there is a mutually helping and corrective feedback loop between insight and formal deduction in actual mathematical practice. Descartes called deduction the intuition of the relation between intuitions.

It would seem, however, that LF cannot itself furnish the higher or ultimate foundations for logic or mathematics itself, specifically with regard to combinatorics, number theory and recursion theory – thus it would seem that LF already assumes that a large portion of logical and linguistic issues have been settled, and thus it serves more as a tool for second philosophy. It would appear, then, that LF cannot in itself completely solve the problems of the philosophy of logic, the philosophy of language, the theory of knowledge and metaphysics.

We are led to the difficult problems of the self-reflection of formal systems and the self-foundation of LF – the idea of self-foundation and self-positing. Category theory seems relevant here as a conceptually rich and multidimensional formal system which nevertheless differs in its structure and use from classical logico-deductive systems. In category theory concepts can be co-implicit in each other; there is a facility for passing to the meta-level inside the system; proofs are more analytic in the sense of generally involving an unpacking of concepts employing only minimal logic. Category theory’s ascent into abstraction bears a relation to ordinary mathematics similar to that of Descartes’ analytic geometry to Euclidean geometry.

Maybe we need an entirely new self-reflective concept of formal systems and of the role of formal systems. Maybe the activity itself of doing LF can manifest or show something higher, though this can never be expressed or deduced in a formal system of LF. This again is an instance of the aforementioned feedback loop, which echoes the famous letter of Plato.

Beyond phenomenology

The text (text 1) that follows expresses an attempt to find a universal characterization of phenomenology which can be applied both to antiquity and modernity, to the west and the east (there is of course already a substantial literature comparing, for instance, German idealism or Husserlian phenomenology with Buddhist philosophy or with Advaita Vedanta).

The description given here is of course very crude (and to some it might recall both Hume and the Abhidhamma). This kind of phenomenology (of 'inner sense elements') is lost in the realm of shadows and in fact deals with artificial abstractions. The basic impetus is of course correct, but the ordinary shadow-immersed and shadow-watching mind does not have the force to free itself from this realm in order to be able to know the shadow realm from a point of view beyond this lowest realm. This is discussed in (text 2). The required force is given by Platonic dialectics, and this reveals the truth that the stream of ordinary consciousness is actually first and foremost a stream of (immanent) concepts.

(text 1)


The most basic idea is that of a methodology based on a pure, calm, detached awareness and observation of consciousness as it is in itself, as it presents itself to itself without being colored or altered by any presuppositions or modifications.

Instead of perceiving consciousness as a part of the world, the world is perceived as a part of consciousness, as immanent in consciousness. And it seems that what hinders this shift of perspective is forgetfulness and non-awareness of certain fundamental constitutive elements of consciousness (and the world): temporality and the active subjective nature of recollection and imagination which dominates ordinary experience. The world, with its persons and objects that we are normally so completely entangled and engaged with, reveals itself upon careful analysis to be a temporal flux of subjectively constituted structures based on combinations of imagined and recollected inner sense elements. And we have the tendency to perceive and make use of 'wholes' while forgetting that they are wholes, not perceiving the parthood of their parts or the way these parts are brought or bring themselves together. It would be interesting to develop a theory of forgetfulness and remembrance, and of the different degrees and modes in which things can be brought to mind and be persistently present (not immediate but yet ‘at hand’, a kind of threshold awareness), in connection with the study of the terms sati and sampajañña in Pali Buddhism. We can adduce evidence that this methodology is clearly set out in ancient eastern and western texts as well as in modern times (especially the tradition that runs through Descartes, Hume, Kant and Brentano). Of special importance is the concept of sâksin in Advaita Vedanta (see the book by Bina Gupta on this subject). Also, buried within the Pali suttas we find injunctions to develop a special type of neutral awareness and analytic attention to experience.
We encounter expressions such as diṭṭhe diṭṭhamattaṃ bhavissati, sute sutamattaṃ bhavissati, mute mutamattaṃ bhavissati, viññāte viññātamattaṃ bhavissati, Samyutta Nikâya 35.95 (and in the Bāhiyasuttaṃ of the Udâna), 'in the seen there will be merely what is seen, in the heard merely what is heard, in the sensed there will be merely what is sensed, in the cognized there will be only what is cognized'. This same sutta also contains passages involving past, present and future modes of presentation of the contents of consciousness.

The text that follows can be considered as a preliminary motivation for the necessity of the Platonic dialectics for anything resembling the goals of this 'phenomenology'.  The Platonic dialectic furnishes also a crucial distinction between immanent concept (which is more than just the 'psychological component' which was the object of a certain scorn by both Husserl and Frege) and a pure and transcendent 'concept' which is not readily available to ordinary consciousness.

(text 2)

Many would object that ordinary consciousness can only effect this self-transparency and self-reflection imperfectly and to a very limited extent – and by this we do not mean only a Kantian-type postulation of epistemic limitations but also the difficulty of first-person self-transparency in a more ordinary psychological sense (further ahead it will become clear why we consider the kind of limitation postulated by neuroreductionism to be invalid). And hence a kind of cultivation of consciousness is required in which, so to speak, consciousness goes out beyond itself without abolishing its ordinary processes, so that these processes can be seen in the most perfectly objective way. The guiding idea is the possibility of consciousness stepping outside itself and becoming integrally and clearly aware of itself: transcendental self-transparency. There is a connection to the deeper significance of the arguments of 19th century anti-psychologism (such as Frege's), although we note that the basis of this anti-psychologism can already be found in Kant.

One aspect of this transcendental self-transparency is a state of consciousness in which we are directly and primarily aware of the stream or current of our thoughts seen as thoughts (i.e. the totality of the world and experience is seen as derived, dependent, constituted and immanent in the current of our thoughts). But the most fundamental aspect is understanding the whole of consciousness as a process which unfolds (from a unified to a more fragmentary state like the growth of a tree) directed by root causal forces; transcendental self-transparency hinges on the ability to obtain contact and seize control of the above root causal forces and thereby obtain the ability to freely invert, revert and reintegrate the whole out-folding process of consciousness. Phenomenology is not mere detached gazing at the shadows on the wall, it must have an anagogic dialectical component.

This stepping outside oneself has applications to psychotherapy and self-development. This is tragically lacking in modern western philosophy: a 'practical reason' acting not on the world but on consciousness itself, and especially on certain aspects which are not afforded an adequate role in most modern theorizing. Ancient philosophy involved psychotherapy. A further topic to be explored would be the philosophy of lucid dreaming and the role of dreams in many historical philosophers. We also mention that meditation has always been a deeply appealing and yet rather elusive endeavor. The reason for the lack of solid progress seems to be unawareness that meditation is not an activity or study like others, for which a scheduled time is set aside in the hope that progress will be directly related to the intensity and time of practice. Rather, the precondition for progress involves a total reformation and overhaul of one's everyday life habits and mental patterns. Once this global reform has become established and solid, which can be likened to diffuse light, meditation can then take place as a kind of focusing or diffraction of this energy. But this local active engagement in meditation can also in turn serve as a tool for such a global reform. There is another problem: if meditation can be likened to the escape from the Cave, then what is the role of phenomenology here ? It is certainly not focusing on the shadows qua shadows. Rather it is a vertical phenomenology that pertains to effecting a fundamental insight leading to conversion and an impetus to escape.

The following text is not really interesting except as pointing out some important facts about certain forms of meditation and the body. The kind of atomism found in Hume and the abhidhamma is radically false.

(text 3)

We can attach great phenomenological importance to the body, feeling and the senses. But this in a way distinct from mainstream body-centered phenomenology which involves naturalist assumptions. Indeed the ultimate goal is the very opposite: we study and acknowledge the embodiment of consciousness in order, so to speak, to ultimately disembody and denaturalize phenomenology. And by 'body' we mean the internal first-person experience of the body including the analysis of elements of consciousness according to the different sense-fields and their associated neurological systems. It will be seen that the inner experience and concept of the body is related to the constitution of personal identity and the self-in-the-world. Also that the body can play a central feedback role in the previous methodology of transcendental self-transparency. There is also much that could be said about the methodological value of the more general contemplation of the composite structure and processes of the natural world.

We could compare physical (Democritus, etc.) and psychological atomism (abhidhamma, Hume). Atomism fails to give an account of space and time, both in their subjective and allegedly objective modes, and of why these two aspects could coexist. It gives no account of gravity or electromagnetic fields, of the nature of 'atoms' themselves, of why and how they causally influence each other from any distance, how they can be substances bearing properties and relations, how they can exist 'in' space, how they can change and yet be indivisible, have individual identity, why they should follow a priori mathematical laws, etc. Atomism offers no account of mathematics or intentionality or meaning. It cannot account for the body of experimental evidence supporting the claim that fundamental physics is essentially a theory of fields rather than particles (which are abstractions and epiphenomena without fixed identities). The relations between the atoms are just as important, if not more so, than the atoms themselves. It is anthropomorphic to think of atoms as free independent individuals. The equations that determine the relations show that atoms are just nodes, inseparable and existing only as part of a holistic structure (like a tapestry). Atomism does not explain space, time or change; it is plausible that timeless space-time is what actually exists (and hence atoms are space-time tubes or curves, each one determining and determined by every other) and that the instantaneous present exists only relative to a particular consciousness. However atomism remains (once it discards its absolutist claims) an important paradigm of rationality.


The following text is a study of the relationship between phenomenology, ancient skepticism and Hegel. These considerations are only preliminary and far from correct. It is important to study in depth the relationship between Platonic dialectics, ancient skepticism and Hegel. Our previous note on Analyticity and the A Priori is relevant to skepticism. Hegelian dialectics can only become a part of Platonic dialectics through a careful study of the logical and mathematical core present therein. The goal of Hegelian dialectics (just like the goal of phenomenology) is quite illusory outside the central theory and practice of Platonic dialectics, and especially its mathematical component.


(text 4)

Following phenomenological methodology we hope that we can obtain direct global insight into the higher primordial constitutive structures and processes of consciousness, notably with regard to the complex domains commonly labeled under the terms 'self', 'agent', 'knower', personal identity, and especially the domain of concepts and the process of reason itself. Key formal aspects are self-reference, self-modification, self-positing, self-othering, return-to-self, self-transcendence. We also propose to elucidate the subtleties of the doctrine of non-self (anattâ) in the oldest, most authentic substrate of the Pali canon, the part concerned with purely philosophical, moral and yogic elements. See C.I. Beckwith's Greek Buddha for argumentation for a purely philosophical and yogic (in the sense of aiming at the liberation of the mind) form of original Buddhism virtually identical in content to Pyrrhonism – a thesis which is strengthened by Hegel's interpretation of Sextus in his Lessons on the History of Philosophy. Bina Gupta has shown that the Vedanta and the other main darshanas can be approached from a purely philosophical point of view.

An important goal is to obtain not merely a theoretical understanding but direct intuition of the radically different nature of what was previously apprehended and taken to be our ordinary 'self' and personal identity. Also to show how cognition of objects is tied to self-consciousness (as Dennis Schulting argues in Kant's Radical Subjectivism).

It is interesting to present the following passages from Hegel. In the Encyclopedia Logic:


In other words, every man, when he thinks and considers his thoughts, will discover by the experience of his consciousness that they possess the character of universality as well as the other aspects of thought to be afterwards enumerated. We assume of course that his powers of attention and abstraction have undergone a previous training, enabling him to observe correctly the evidence of his consciousness and his conceptions.

And in the Lessons in the History of Philosophy:

The two formal moments in this sceptical culture are firstly the power of consciousness to go back from itself, and to take as its object the whole that is present, itself and its operation included. The second moment is to grasp the form in which a proposition, with whose content our consciousness is in any way occupied, exists. An undeveloped consciousness, on the other hand, usually knows nothing of what is present in addition to the content.

Phenomenology can contribute to a philosophical corrective reconciliation and synthesis between disparate philosophical views, both ancient and modern, eastern and western (but without implying any teleological progress). One key to this endeavor will be attaching importance to radical new interpretations and evaluations of the influence and significance of Pyrrhonism, Stoicism and the phenomenism of David Hume. Another is to show that Plotinus has much to offer in the form of rigorous philosophy. Porphyry in his Life of Plotinus states that both the Stoic and Peripatetic doctrines are sunk in the work of his teacher. Preliminary work towards this goal will involve bringing to light neglected agreements and correspondences between different philosophical schools. For example: the dialectics of the middle Platonic dialogues is quite close to the argument structure of Sextus; so too is later Neoplatonism's preoccupation with the ineffability of the Good. The connection of Pyrrhonism to the Sophists needs to be explored. Aristotle's De Anima has a striking agreement with eastern systems such as Yoga and Samkhya, and the ideal sage of the Nicomachean Ethics (as well as the ethics of Democritus) differs little from the Stoic sage or the eastern Yogi.

Plotinus indirectly engages with the sceptical later Academy through Saint Augustine's Contra Academicos (in this work we find the remarkable definition: I call world whatever appears to me), but such an engagement is already found in middle Platonism, for instance in Numenius, as well as among the Stoics (as reported in Cicero's Academica). It is illuminating to compare the psychology and epistemology found in the Stoics and Academics to their counterparts in the Pali suttas. There are some surprising Plotinean anticipations of Kant, some of which seem to correspond even in their very phrasing (see our paper Aristotle’s Analysis of Consciousness). Hume's system (which has close affinities to the scepticism of the later Academy) is very illuminating despite its errors and shortcomings: a careful unveiling of these leads directly towards progress in phenomenology. Also, Hegel's reading of Sextus (in the Lessons in the History of Philosophy) provides powerful corroboration of the thesis defended in C.I. Beckwith's book Greek Buddha.

In Sextus the transcendental subject abstains (epechein) from positing finite determinations, either sensuous or rational, as the truth – while at the same time considering how consciousness 'goes back from itself', considering its very operation in addition to the content.

The following text is more interesting and can be reinterpreted as a justification for Platonic dialectics (as a vertical phenomenology, although the use of this term is perhaps not at all appropriate) – although of course there is much that needs to be corrected and clarified. Indeed the problems raised at the end can be solved by Platonic dialectics. The text very correctly points out the essential role of pure reason, especially pure mathematical reason, for the feasibility of attaining the goals of phenomenology.

(text 5)

Let us make the distinction between horizontal and vertical phenomenology more clear. The distinction is based foremost on the nature of the goal that is to be achieved, something which is in turn reflected in the methodology. Horizontal phenomenology’s goal is to reach back to the grounds of consciousness itself in order to explain and find the ultimate foundations of the world and ordinary consciousness. The goal is not to transcend the world or even to overcome ordinary consciousness (as if this could be a goal in itself) but rather to find an ultimate justification and source of meaning for the world. It is as if the prisoners in Plato’s Cave were interested in the objects behind them only for the sake of guaranteeing that the shadowy spectacle in front of them could be given a certain consistency and meaning, and thus their whole life as prisoners be in some sense consolidated and less rather than more likely to be questioned. The prisoner-philosophers would be very interested in and focused on the shadows as shadows and would study all their forms and variations and attempt to reconstruct the source-object as a kind of invariant. Vertical phenomenology on the other hand is concerned primarily with the escape from the Cave. The awareness of the shadows qua shadows is inseparable from the awareness of the chains and knowledge of the light-source. Its methodology is all about chain-breaking, light-seeking and truth-seeing, and the shadows are studied both in their unreality and as factors of limitation and conditioning (horizontal phenomenology is incorporated but never as an end in itself). Vertical phenomenology is at once epistemic, ontological and ethical. It can be found most clearly in the systems of Yoga, Samkhya and Advaita Vedanta as well as in Plotinus.

As already mentioned, the Pali texts are complex, heterogeneous and difficult to interpret, though there are grounds for postulating an oldest purely philosophical, yogic and ethical substrate. What is curious is that a kind of horizontal phenomenology seems to play an important role in the form of the well-known practice of satipatthana, although there can be little doubt that this takes place in the context of the aims of vertical phenomenology. However satipatthana can be taken out of context and appropriated in the form of a horizontal phenomenological psychology or psychotherapy, and we can ask whether certain ‘mindfulness’ or ‘living in the present moment’ practices do not really hinder the prisoners’ propensities to break free and seek the Truth.

The thesis we wish to put forth in this section is that horizontal phenomenology is a vital element of a valid vertical phenomenology. This is because ordinary consciousness has a dual nature consisting of those more properly conscious elements and those that form a kind of complex substrate acting “behind” our more conscious experience. Thus the awareness of ordinary awareness needs to increase its range; its rays need to shine upon all the hidden obscure corners and recesses of our habits, conditioning and buried memories. The cultivation of a horizontal phenomenology as outlined in the first sections of this note appears thus as an important, even necessary, step along the path to self-transparency. Without this preparation the prisoner cannot hope to escape from the Cave; rather she will only meet with more shadows.

Language permeates consciousness and our representation of the world yet just as the primordial constituting factors we discussed above are normally forgotten so too do we usually lack the transcendental awareness of the manifestation of language qua language (in particular inner verbal discourse) in our conscious experience. We find the idea of the philosophy of language as phenomenology well worth exploring (and there appears to be some significant connection to Zen/Ch'an).

How can we apply logic and language to determine the relationship between logic and language themselves and something which is beyond logic or language ? The philosophy of logic and the philosophy of language can be seen as part of phenomenology, for logic and language are key parts of the structure and dynamics of consciousness. While standard attempts to theorize a foundation of logic itself will evidently depend on logic, there is a way out of the predicament once we move to consciousness.

How are we to understand the theory that logic and language are precisely aspects of the structure and dynamics of consciousness (but not ordinary consciousness in the sense of psychologism) when to understand any process we must presuppose logic and language ? Hegel offers this solution in his Introduction to the Science of Logic:

Die Logik dagegen kann keine dieser Formen der Reflexion oder Regeln und Gesetze des Denkens voraussetzen, denn sie machen einen Theil ihres Inhalts selbst aus und haben erst innerhalb ihrer begründet zu werden. Nicht nur aber die Angabe der wissenschaftlichen Methode, sondern auch der Begriff selbst der Wissenschaft überhaupt gehört zu ihrem Inhalte, und zwar macht er ihr letztes Resultat aus; was sie ist, kann sie daher nicht voraussagen, sondern ihre ganze Abhandlung bringt dieß Wissen von ihr selbst erst als ihr Letztes und als ihre Vollendung hervor.

Logic, on the contrary, cannot presuppose any of these forms of reflection, these rules and laws of thinking, for they are part of its content and they first have to be established within it. And it is not just the declaration of scientific method but the concept itself of science as such that belongs to its content and even makes up its final result. Logic, therefore, cannot say what it is in advance, rather does this knowledge of itself only emerge as the final result and completion of its whole treatment. (Di Giovanni tr.)

To understand this more fully we would need to understand more fully how the Phenomenology of Spirit is articulated with the Science of Logic.

But let us consider another point. What is the relationship between pure rational activity (such as pure mathematics) and phenomenology ? And how are we to understand the anagogic role of the Platonic dialectic (which was also accepted by Plotinus), if we take this dialectic in the sense of the anagogic use of pure reason (as in Gödel) ? Also, how does Hegel’s concept of Logic compare to that of the Plotinean self-knowledge of the nous or the lower form of intellection of the soul’s logoi as conceived by Proclus ?

An alternative to the horizontal vs. vertical distinction might be simply that phenomenology itself represents an early less advanced stage of the anagogic path (concerned more with appearances, images, opinions and the relations of ordinary consciousness – a mere antechamber of Truth) which must ultimately give place to something else.

Sunday, March 30, 2025

Three metaphilosophies

We have proposed three metaphilosophies.  Phenomenological metaphilosophy involves understanding the timeless and universal principles of the phenomenological method and program which are found across a great variety of different philosophical systems, times and places.  Formal metaphilosophy takes a highly skeptical view of common philosophical practice with a focus on the logical and linguistic aspects and proposes a methodology based on axiomatic-deductive systems and rigorous definition of all concepts involved in all philosophical arguments and debates. Critical metaphilosophy (inspired by Frege, the early Husserl, Gödel, Gellner (backed by Russell), Mundle, Preston, Findlay, A. Wierzbicka, J. Fodor, C. Ortiz Hill, Rosado Haddock and Unger, John W. Cook and also some considerations of Marcuse) questions the value of much of 20th and 21st century philosophy from a predominantly logical and linguistic point of view as well as paying great attention to presupposed or insinuated materialist hypotheses found therein (Preston and Unger should have engaged in exhibiting substantial textual evidence for 'scientiphicalism').  Every accusation is a confession, and if linguistic philosophy/ordinary language philosophy is patently bad philosophy, the kind of condemnations it engaged in towards previous philosophy prophetically turned out to apply remarkably well to itself. And indeed this linguistic philosophy never ceased to be an underlying powerful force until today despite its various disguises and apparently sanitized versions, including analytic metaphysics. A true scientific linguistics deployed in a critical metaphilosophical way is what is called for - the study of the psychology, sociology and linguistics of professional philosophy/sophistry. Even in non-orthodox philosophers in this tradition (Robert Hanna, George Bealer) we find a strong presence of many of its assumptions and rhetorical-argumentative patterns.

We do not lose sight of the hard problems and limitations involved both in phenomenological and formal metaphilosophy.

Why use the term phenomenology rather than psychology or introspective psychology ? For the greatest of problems involves what is most primordially given. And how can truth be found or based on anything but this ? The goal of philosophy is to see fully, to know fully, it is self-transparency and liberation.  Locke, Berkeley and Hume dealt with the deepest and most fundamental, most fertile of all questions. They looked in the right direction and had the right perspective. The great question: what is a concept ? Without concepts there is no logic, no language, no reason, no knowledge.  Can we admit knowledge without any conceptuality ? Or mind without conceptuality ? Certainly ordinary knowledge involves concepts. Even asking about knowledge and truth, are we not asking about concepts ? Is not truth a concept ? Is not knowledge a concept ? And do we have a concept of a concept even if it is an unclear, vague, definition-lacking concept ?

And what about ethics, especially an ethics based on compassion ? Schopenhauer, as we mentioned before, offers us a purely phenomenological ethics based on compassion, which thus would appear to have a non-conceptual anchor.

If philosophy is foremost a quest for individual clarity and knowledge regarding one's own consciousness, how can we express the truth we find to others ?  How can we argue, how can we persuade ? What are the rules which must govern or direct this argument or persuasion ? There are no arguments without concepts. If we do not know what a concept is, we do not know what an argument is. We have a concept of concept yet this concept is not an adequate concept. We can know things and yet not know how to define them. Sentences express concepts (they can be nominalized) just as adjectives, adverbs, nouns, verbs, pronouns, etc.  And concepts are not vague. It is difficult to find two different words (from hundreds of thousands) which have exactly the same meaning. If meaning boundaries were fluid we would not expect this to happen.

Without concepts there is no language. It is erroneous and foolish to go about theories of language and profess to talk about the mind without first venturing into the vast realm of the philosophy of concepts. 

Our stream of consciousness is not a stream of sensations or recollected images of simple sensations but includes a stream of concepts (in-consciousness concepts, not Fregean concepts obviously).

As a temporary remedy for this state of affairs we propose formal philosophy, carrying out philosophical arguments in an entirely mathematical fashion.

Husserl's Logical Investigations is a great textbook in philosophy, a kind of summa of the best psychological-introspective, logical, linguistic and ontological work of the 18th and 19th centuries.  Likewise Frege is a model of clarity and elegance - regardless of one's views.

The danger of philosophical introspective (and transformative) psychology is turning into mere psychotherapy or psychoanalysis or becoming uncritically influenced by occultism and religion. Equally harmful is naturalism and neuro-reductionism and behaviorism and the dogmas of 'linguistic philosophy' or 'ordinary language philosophy'. Speech acts and language games are still abstracted, isolated, analyzed and understood conceptually.

The dilemma here seems to be between staying safely at the periphery or venturing to where lurks the great danger of religion, occultism and cults.  Philosophy is indeed a psychotherapy which aims heroically to overcome the deeply ingrained conditioning of religion and materialism alike (cf. Gödel's statement: religion for the masses, materialism for the intellectuals). There are no royal roads or shortcuts in philosophy.  See this essay by Tragesser and van Atten on Gödel, Brouwer and the Common Core thesis. Gödel's theory, as recounted by the authors, is of utmost significance. Gödel was promoting the restoration of the authentic meaning of Plato's dialectics and the role of mathematics expounded in the Republic and other texts.  Perhaps Gödel has pointed out the best path (at once philosophical and self-developmental) for so-called "Western man", one which avoids the double pitfall of materialism and religion/psychotherapy/occultism. In the 21st century (inheriting from the 20th century) we are inundated by the cult of the irrational, by anti-rationalism in every conceivable and subtle and insidious form. The "rational" is only allowed to thrive in its most miserable, limited and adulterated form, harnessed and enslaved to the lowest materialistic/technological/economical/military goals.  And the technological and economical goals here do not even aim at the common good and equal and fair distribution of the earth's resources.

And here is what is remarkable about the Platonic-Gödelian method: the confluence between pure mathematical thought and introspective transformative philosophical psychology.  But this project can be discerned in Husserl's Logical Investigations, and Claire Ortiz Hill has written extensively about the objective, formal and logical aspect of this work, in particular the important connection to Hilbert's lesser known philosophical thought.  However the psychological and phenomenological aspect is just as important, just not in the way of the later Husserl, rather in the Platonic-Gödelian and transformative philosophical psychological way.

The epokhê as Husserl outlined it is not possible (and even less is the Heideggerian alternative valid); rather such clarity and 'transcendental experience' is possible through the Platonic-Gödelian method.

George Bealer (1944-2025)

 https://dailynous.com/2025/01/22/george-bealer-1944-2025/

Friday, March 28, 2025

Fundamental problem in the philosophy of logic

The fundamental problem in the philosophy of logic is understanding the nature and meaning of formal logic, that is,  so-called mathematical or symbolic logic.

The key notion involved is that of self-representation and self-reflection.

We have informal but rigorous proofs concerning abstract axiomatic systems. Then we have abstract axiomatic systems representing reasoning and proof concerned with abstract axiomatic systems. But then we must prove that a given structure is a proof of a proposition in the same way we prove a proposition in the object axiomatic system. And we require an abstract axiomatic system to reason about proofs in the deductive system - or to prove soundness and consistency.  But how do we prove that what we informally can prove we can also formally prove ?

In order to carry out deductions we must have the concepts of rule and what it means to apply a rule correctly. Likewise we must have the concepts of game and goal. The concept of rule is tied to logic and computability. 

The concept of game includes counting, computing and reasoning.

Kant's question: how is pure mathematics possible ? should not have gone the way of synthetic a priori intuitions but rather to the question: how is formal mathematical proof possible ? That is, how would Leibniz's characteristica be possible ?

Hilbert's treatment of geometry vs. Kant.

Another problem involves the countability of linguistic expressions vs. the possible uncountability of objects.  It follows that there are uncountably many indefinable objects which hence cannot be uniquely identified. Any property they have they must share with other such objects.

We find the term 'sociologism' very apt to describe the 'linguistic turn' (meaning-as-use, inferentialism) of Wittgenstein, Ryle, Austin and its continuation in Sellars, Brandom, etc. There is a strict parallelism with the earlier psychologism. It is likewise untenable. It is part of the physicalist assault against the mind, consciousness, individually accessible knowledge and truth (for example a priori moral, logical and mathematical truth) and moral conscience and freedom. It is a pseudo-scepticism and pseudo-relativism/conventionalism and is ultimately nonsensical. It is reductionism (grabbed from neuroreductionism and functionalism) and is circular.  While sociology is a legitimate scientific discipline, sociologism is not based on science and is bad philosophy.

The idea that meaning of the term 'and' can be given by exhibiting a rule does not appear to be very cogent.

A: What does 'and' mean ?
B: That's simple. IF you postulate a sentence A as being true *AND* a sentence B as being true THEN you can postulate that the sentence "A and B" is true (and vice-versa).
A: I asked for you to define 'and' and you gave me an explanation that uses 'and', 'if...then', 'being true' and the concept of judgment. Sorry, that just won't do ! 

It is also obvious that B may be inferable from A and yet a person who accepts A is not sociologically obliged in any way to state or defend B: consider, for example, Fermat's last theorem before its proof by Wiles.  Any adequate language for fully describing the full range of sociological behavior, norms and practices is at least Turing complete.  So appeals to sociology cannot be used to furnish foundations for either logic or language.

Sociologism stands Frege on his head. It is a transposition to the social plane of the false dogma of functionalism and behaviourism.

Given a sentence S we can consider the recursively enumerable (but not recursive) set I(S) of all sentences which can be inferred from S in a system T.  Clearly I(S) cannot count as the meaning of S. Elementary number theory abounds in statements involving only elementary concepts whose truth and inferability are not known.
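
The point that I(S) is recursively enumerable but not recursive can be illustrated with a toy sketch (the miniature 'system' and all names here are hypothetical, chosen only for illustration): we can mechanically enumerate consequences by forward chaining, but nothing in such an enumeration ever decides non-membership.

```python
from collections import deque

def consequences(axioms, rules):
    """Enumerate the set I(S) of sentences inferable from `axioms` by the
    binary inference `rules` (breadth-first closure). This witnesses that
    I(S) is recursively enumerable; nothing here decides non-membership."""
    seen = set(axioms)
    queue = deque(axioms)
    while queue:
        s = queue.popleft()
        yield s
        for t in list(seen):
            for rule in rules:
                for c in rule(s, t) + rule(t, s):
                    if c not in seen:
                        seen.add(c)
                        queue.append(c)

# Toy modus ponens on strings: from A and "A->B" infer B.
def mp(x, y):
    return [y[len(x) + 2:]] if y.startswith(x + "->") else []

derived = set(consequences({"p", "p->q", "q->r"}, [mp]))
print(sorted(derived))  # ['p', 'p->q', 'q', 'q->r', 'r']
```

For an undecidable system the same generator still enumerates I(S); it simply never halts on a non-consequence, which is exactly the r.e./recursive gap.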

Recommended reading: C. W. Mundle - A Critique of Linguistic Philosophy (Oxford, 1970).

Another strand of linguistic philosophy which seeks to undermine the certainty, clarity, objectivity and a priority of knowledge has roots in the later Wittgenstein's theories of polymorphism and his assault on definitions and meanings (but see the discussion in the Theaetetus). In its current form it revolves around what we call 'the cult of vagueness'.

The cult of vagueness attempts to undermine the clarity, precision and non-ambiguity of language, and most importantly the language of philosophy, ethics, psychology - not to mention logic, mathematics and science.  Two of its sources are the 'paradoxes' and obvious peculiarities of certain natural language elements, especially the more homely and down-to-earth terms like 'bald' and 'cup' - there is nothing strange about certain adjectives having a threefold decomposition.  Of course to do this it has to assume a certain doctrine about language and its relation to the mind and the world.

The meaning of a property can be crystal clear and yet the application of the property can be difficult and uncertain. And it is only uncertain because the meaning is clear.

The cult of vagueness has its own peculiar rhetorical style which involves never stating one's assumptions clearly but only insinuating them.  

An erroneous theory of 'semantic relations' includes 'speech acts' like 'whispering'.  What do they mean by act (an old Aristotelian metaphysical concept) ? And whispering is a quality of speech, not a semantic relation. For instance 'Mary whispered the nonsense spell she read in the book' has no semantic component.

Anna Wierzbicka's distinction between folk and scientific concepts demolishes the cult of vagueness.  Our low-level concepts do not have definitions in the technical sense, they have stories. They are also dynamic and socio-specific.  Thus it is a category mistake to concoct arguments which ignore this distinction.

Linguistics depends on psychology and the philosophy of mind but these last depend on language.

Most adjectives and many nouns are not analogous to mathematical properties such as 'prime number'.  Negation functions differently. Often the adjective property has a tripartite structure, for instance 'tall', 'short' and 'medium height'.  Thus if somebody is not tall it does not mean they are short.  These folk concepts (admitting a fair range of adjectival and adverbial degree modifiers) can give place to scientific ones which generally will involve a scale, a measure.  Temperature is measured by different instruments. There is a limit of precision and variation across measurements by different instruments or the same instrument at different times.  But this does not make the concept of temperature vague or ambiguous. In fact statistical concepts are not vague even if as properties they cannot describe the state of a system in a unique way.
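
A minimal sketch of this contrast (the thresholds and numbers are invented for illustration): a tripartite folk classifier whose negation behaves as described, next to a scientific scale whose measurement variation leaves the concept itself sharp.

```python
import random
import statistics

def folk_height(cm, short_max=165, tall_min=180):
    """Tripartite folk classification (thresholds invented for the sketch):
    the negation of 'tall' is not 'short' but merely 'not tall'."""
    if cm < short_max:
        return "short"
    if cm >= tall_min:
        return "tall"
    return "medium height"

print(folk_height(160), folk_height(172), folk_height(185))
# short medium height tall

# A scientific counterpart: temperature as a measured scale. Repeated noisy
# readings vary from instrument to instrument, yet the concept is not vague.
random.seed(0)
true_temp = 21.3
readings = [true_temp + random.gauss(0, 0.05) for _ in range(100)]
print(max(readings) - min(readings) > 0)                   # readings do vary...
print(abs(statistics.mean(readings) - true_temp) < 0.05)   # ...around a sharp value
```

The application of 'tall' to 172 cm is uncertain precisely because the tripartite meaning is clear; the spread of temperature readings is instrument noise, not conceptual vagueness.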

We can transpose Gödel's arguments to Zalta's Object Logic.  Instead of numerical coding of formulas we use the encoding relation for properties and objects.  We can thus define predicates for an object encoding only a certain property, only a certain sentence, and only a proof of a certain sentence, Proof(p,a), where p is to be seen as encoding a sequence of sentences.  Then we can define Diag(a,b) iff a encodes the proposition Bb where b encodes only the property B.  Then we can construct the Gödel sentence by taking the property G = λz.¬∃x∃y(Proof(x,y) & Diag(z,y)), which is encoded by g, and forming the Gödel sentence Gg.

Consider a reference relation between expressions and objects. Suppose that there were uncountably infinitely many objects.  Then:

i) either there are objects which cannot be referred to by any definite description

ii) or there are objects which share all their properties with infinitely many other objects (indiscernibility)

Or consider infinitely many objects with one binary relation: there are uncountably many possible states of affairs, which thus cannot be referred to in a unique way. The same argument applies.  And of course there are arguments involving categoricity.
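
The countability half of this argument can be exhibited directly. Assuming expressions are finite strings over a finite alphabet (a two-letter alphabet here, purely for illustration), they can all be listed in a single sequence; since the real numbers, say, cannot be so listed, some objects must escape every definite description.

```python
from itertools import count, product

def expressions(alphabet="ab"):
    """List every finite string over a finite alphabet in one infinite
    sequence: all length-1 strings, then length 2, and so on. This is an
    explicit witness that the set of linguistic expressions is countable."""
    for n in count(1):
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

gen = expressions()
first = [next(gen) for _ in range(6)]
print(first)  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Pairing each expression with its position in this list gives an injection of expressions into the natural numbers, so at most countably many objects can receive a unique definite description.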

"Speech acts", the vagueness of ordinary terms...this is already found in Husserl's Logical Investigation (see for instance vol II, Book I). And previously in Benno Erdmann. 

Meaning and psychology: the great question.  Consciousness is so much more than the lower sphere of (mainly audio-visual) fantasy and imagination processes.  When we think of the concept of prime number or the concept of 'meaningless sentence'...and of course there is the Fregean view.

Multiplicity of psychological experience in the meaning phenomenon. But we can abstract a type, a species of what is invariable. Husserl is led from here to ideal objects à la Frege, the space of pure meanings. But in the first Logical Investigation Husserl discusses how the psychological content of abstract expressions is very poor, fluctuating, even totally non-existent, and hence cannot be identified with meanings. Yet Husserl mentions the hypothesis of a rich subconscious psychological content being involved. What is going on really when we think of "prime number" ? Do we have a subconscious web of experience reaching back to when we first learnt the concept ? And could not all this ultimately correspond to a kind of formal rule such as: if a divides p then a is 1 or p, or if a is not 1 or p then a does not divide p ? There is nothing social here, or only in the most vague and general way. An extended and rectified Hilbertian view can be seen as depth phenomenology perhaps, especially in light of modern formal mathematics projects.

A priority, certainty, as well as intersubjective agreement - all this depends on recursion theory and arithmetic or its 'deep logic'. Logos is a web of relations which is not relative. 

Meinong's Hume Studies: Part I: Meinong's Nominalism

Meinong's Hume Studies: Part II. Meinong's Analysis of Relations

The deep meaning of Gödel's incompleteness theorem is the mutual inclusion of the triad: logic, arithmetic and recursion theory. 

Gödel's rotating universe.  Individual subjective time parametrizing a path need not have any simple correspondence with cosmic time, which presupposes a global foliation by hypersurfaces.

Computability, determinism and analyticity

An overlooked but nevertheless very important problem concerns the role of the differentiable and smooth categories in mathematical physics, that is, the categories of maps having continuous derivatives up to a certain order or of all orders. Our question is: why use such a class of maps rather than (real) analytic (or semi-analytic) ones ?  The equations of physics have analytic coefficients.  Known solutions are analytic (for simplicity we do not distinguish analytic from meromorphic).  In fact known solutions are analytic, having power series representations with computable coefficients (for a standard notion of computability for sequences of real numbers).  And in fact all numerical methods for mathematical physics depend on working in the domain of computable analytic functions.

The class of computable analytic functions is related to the problem of integration of elementary functions. It is also very elegant and simple in itself as it reduces the problems of the foundation of analysis (infinitesimals, non-standard analysis) to the algebra of  (convergent) power series.
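
A sketch of what 'computable analytic function' might mean in practice (an illustrative rendering, not a full definition, and the function names are ours): a function is given by a program computing its Taylor coefficients exactly, here for the exponential function.

```python
from fractions import Fraction
from math import e

def exp_coeff(n):
    """The computable coefficient sequence a_n = 1/n! of exp's power series,
    returned exactly as a rational number."""
    a = Fraction(1)
    for k in range(1, n + 1):
        a /= k
    return a

def eval_series(coeff, x, terms=30):
    """Evaluate the truncated power series sum_{n < terms} coeff(n) * x**n
    exactly in rational arithmetic, converting to float only at the end."""
    return float(sum(coeff(n) * Fraction(x) ** n for n in range(terms)))

print(abs(eval_series(exp_coeff, 1) - e) < 1e-12)  # True
```

The point of the sketch is that all the analysis happens in the algebra of (convergent) power series with exactly computable coefficients, with no appeal to infinitesimals.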

Deep results in the theory of smooth maps depend on the theory of several complex variables.

The existence and domain of analytic solutions to analytic equations is an interesting and difficult area of mathematics.  Several results rule out associating analytic equations with any kind of global determinism (in the terminology of Poincaré, solutions in power series diverge). That is, if we wish to equate determinism with computability and thus with computable analytic functions in physics. Thus that computable determinism is essentially local is not philosophy but a hard result in mathematics.

An interesting mathematical question: are there analytic equations which (locally) admit smooth but non-analytic solutions ?

Another vexing question: why are there not abundantly more applications of the theory of functions of several complex variables (and complex analytic geometry) to mathematical physics ?

There are objections against the analytic class.  For instance it rules out the test functions used in distribution theory, or more generally functions with compact support.  Thus we cannot represent a completely localized field or soliton wave (but notice how Newton's law of gravitation posits that a single mass will influence the totality of space).  And yet the most general functions constructed (like the test function) are often simply the result of gluing together analytic functions along a certain boundary. Most concrete examples of smooth but non-analytic functions are precisely of this sort. We could call these piecewise analytic maps. Thus additional arguments are required to justify why we have to go beyond piecewise analytic maps.  An obvious objection would be: distributions and weak solutions. But here again we can invoke the theory of hyperfunctions.  It seems plausible that there could be a piecewise analytic version of distribution theory (using sheaf cohomology) - even a computable piecewise analytic version.
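
The gluing remark can be made concrete with the standard example (the function names are ours): a compactly supported smooth test function is exactly two analytic pieces glued along a boundary, and it fails to be analytic only at the seam.

```python
import math

def glue(x):
    """Two analytic pieces glued at the origin: identically 0 for x <= 0 and
    exp(-1/x) for x > 0. Smooth everywhere, non-analytic only at the seam."""
    return 0.0 if x <= 0 else math.exp(-1.0 / x)

def bump(x):
    """A compactly supported smooth 'test function': equal to exp(-1/(1-x^2))
    on (-1, 1) and identically zero outside, i.e. piecewise analytic."""
    return glue(1 - x * x)

print(bump(0) > 0, bump(1.5) == 0.0, bump(-2) == 0.0)  # True True True
```

Every derivative of `glue` vanishes at 0, so its Taylor series there is identically zero even though the function is not: the failure of analyticity is confined to the gluing boundary, as the note's notion of piecewise analytic map suggests.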

In another note we investigate other incarnations of computability in mathematical physics (and their possible role in interpreting quantum theory).  Can we consider measurable but not continuous maps which are yet computable ? Together with obvious examples of locally or almost-everywhere (except on a computable analytic set) computable analytic maps we can seek examples of nowhere continuous measurable maps which are yet computable (in some adequate sense). The philosophy behind this is that computable determinism may go beyond the differential and analytic category, the equations of physics in this case only expressing (in a non-exhaustively determining way) measure-theoretic properties of the solutions.


We end with a discussion of what constitutes exactly a computable real analytic function and how we can define the most interesting and natural classes of such functions. Obvious examples are so-called 'elementary functions' which have very simple coefficient series in their Taylor expansions. Also it is clearly interesting to study real analytic functions whose coefficient series are computable in terms of n. And can we decide mechanically when one of these functions is elementary ?
Consider the class of real elementary functions defined on a real interval I. These are real analytic functions. How can we characterise their power series ? That is, what can we say about the series of their coefficients ? For instance there are coefficients an given by rational functions in n, or given by combinations of rational functions and factorial functions, primitive recursive coefficients, coefficients given by recurrence relations, etc. It is easy to give an example of a real analytic function which is not elementary. Just solve the equation x′′ − tx = 0 using power series. This equation is known not to have any non-trivial elementary solution, in fact it has no Liouville solution (indefinite integrals of elementary functions).
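
For the example just mentioned, substituting x = Σ a_n t^n into x′′ − tx = 0 (the Airy equation) and matching coefficients gives 2a_2 = 0 and (n+2)(n+1)a_{n+2} = a_{n−1}, so the whole coefficient series is generated by a simple computable recurrence (a small sketch, with names of our own choosing):

```python
from fractions import Fraction

def airy_coeffs(a0, a1, N):
    """First N Taylor coefficients of a power series solution of x'' - t*x = 0.
    Matching coefficients gives 2*a_2 = 0 and (n+2)(n+1)*a_{n+2} = a_{n-1}."""
    a = [Fraction(a0), Fraction(a1), Fraction(0)]
    for n in range(1, N - 2):
        a.append(a[n - 1] / ((n + 2) * (n + 1)))
    return a

airy = airy_coeffs(1, 0, 10)  # the solution with x(0) = 1, x'(0) = 0
print(airy[3], airy[6])  # 1/6 1/180
```

So the function is computably analytic (its coefficients are computable rationals) yet, as noted, not elementary.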
Let ELEM be the problem: given a convergent Taylor series, does it represent an elementary function ? Let INT be the problem: given an elementary function does it have a primitive/indefinite integral which is an elementary function ? An observation that can be made is that if ELEM is decidable then so too is INT. Given an elementary function write down its Taylor series and integrate each term. Then apply the decision procedure for ELEM (of course we must be more precise here, this is just the general idea). Thus to show that ELEM is undecidable it suffices to show that INT is.
In the literature there is defined the class of holonomic functions, which can be characterised either as:

1) Being solutions of a homogeneous linear differential equation with polynomial coefficients.

2) Having Taylor series coefficients given by polynomial recurrence relations.

There is an algorithm to pass between these two presentations. The holonomic class includes the elementary functions, the hypergeometric functions, the Bessel functions, etc. The question naturally arises: given a sequence of real numbers, is it decidable if they obey a polynomial recurrence relation ?
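
The two presentations can be checked against each other in the simplest case (exp, with the ODE y′ − y = 0, chosen by us for illustration): the differential equation with polynomial (here constant) coefficients translates into the polynomial recurrence (n+1)a_{n+1} = a_n on the Taylor coefficients.

```python
from fractions import Fraction

def holonomic_coeffs(N):
    """Generate Taylor coefficients of exp purely from presentation (2): the
    polynomial recurrence (n+1)*a_{n+1} = a_n induced by the ODE y' - y = 0."""
    c = [Fraction(1)]
    for n in range(N - 1):
        c.append(c[n] / (n + 1))
    return c

c = holonomic_coeffs(6)
print(" ".join(str(x) for x in c))  # 1 1 1/2 1/6 1/24 1/120
```

The recurrence reproduces the familiar 1/n! series, illustrating how the ODE presentation and the coefficient-recurrence presentation carry the same information.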

Tuesday, March 25, 2025

Additions to 'Hegel and modern topology'

The something and the other. In the Science of Logic Hegel thinks of the idea of two things bearing the same relation to each other and indistinguishable in any other way. Now a good illustration of this situation is found in the two possible orientations of a vector space. Each bears the same relation to the other and there is absolutely no way to uniquely identify one of them in distinction to the other.  In the same place Hegel talks about the meaning of proper names and states that they have none !
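
The orientation example can be made concrete (a small illustrative sketch of our own): an ordered basis of the plane falls into one of two classes according to the sign of its determinant, and the labels '+' and '-' are pure convention, which is precisely the point: neither class can be singled out except by its opposition to the other.

```python
def orientation(basis):
    """Classify an ordered basis ((a, b), (c, d)) of the plane by the sign of
    its determinant. The two classes are distinguishable only relative to
    each other; the labels '+' and '-' are sheer convention."""
    (a, b), (c, d) = basis
    det = a * d - b * c
    if det == 0:
        raise ValueError("vectors are linearly dependent: not a basis")
    return "+" if det > 0 else "-"

standard = [(1, 0), (0, 1)]
swapped = [(0, 1), (1, 0)]   # same vectors, opposite order
print(orientation(standard), orientation(swapped))  # + -
```

Swapping the basis vectors flips the class, but nothing intrinsic distinguishes '+' from '-': only the relation of each to its other.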

Hegel's strategy. All of Hegel is based on the dynamics and structure of consciousness as it reveals itself to itself. But equally important it is all based on positing this structure of consciousness  to be essentially objective and not subjective.  Thus 'being a subject' is seen as a stage in the development of the object while mere  'subjectivity' is seen as a failure and partiality of objective consciousness in not living up to its full objectivity or actuality (or knowledge and realization thereof).

Thus ideality and infinity are the key structure of an object which has developed to a point of being a subject. 

Finitude and limit:  a closed set.  Yet the boundary also defines what the set is not (cf. discussion on Heyting algebra of open sets, etc.). Thus the boundary contains implicitly its own negation.

Limitation : an open set (also analytic continuation). Limit outside itself.

The ought... manifestation of the germ of a space through a given open set representation of the equivalence class. Any particular one is insufficient and can be replaced by another which is also insufficient.

False infinity: taking simply the set (or diagram) of such open set representatives.  True infinity: taking the limit (equivalence class), or the limit in the categorical sense.

Degrees of interpenetration and transparency between the individual and the universal (and between self-consciousness and essence) in the Phenomenology of Spirit.  From the rudimentary form in the ethical life to: the universal is in the individual, the individual in the universal and the universal is in the relation between individuals and the relation between individuals is in the universal (mutual confession and forgiveness).

Tuesday, March 4, 2025

A central problem of philosophy

To us a central problem of philosophy is to elucidate the relationship between the following three domains of (apparent) reality/experience:

1. logic and language

2. mind and consciousness

3. an objective or external world 

While the relationship between 2 and 3 is a classical topic which has produced an immense literature, the deeper problem seems to be the relationship between 1 and 2 and 1 and 3.

My question is: how can my anti-inferentialism and anti-anti-representationalism and anti-functionalism be expressed in terms of such relationships ? 

Some preliminary and useful questions: what was logic and language for Leibniz, Kant, Fichte, Hegel and Schopenhauer ? What was logic and language for Frege, Brentano and Husserl ?

Why was anti-psychologism not accompanied by a corresponding anti-physicalism ?

Another question: how does one's view of the relationship between 2 and 3 condition one's view of the relations 1-2 and 1-3 ? For instance, is the physicalist or idealist somehow conditioned in (or by) their views on logic and language ?

How are we to understand the theory that logic and language are precisely aspects of the self-interaction (self-reflection)  of a universal (super-individual) consciousness when to understand any process we must presuppose logic and language ?

Logic, on the contrary, cannot presuppose any of these forms of reflection or rules and laws of thinking, for these constitute part of its own content and have first to be established within it. But not only the account of scientific method, the very concept of science as such also belongs to its content, and indeed constitutes its final result; what logic is, it therefore cannot say in advance; rather, its whole treatment first brings forth this knowledge of itself as its final outcome and completion. (Hegel)

How can we apply logic and language to determine the relationship between logic and language themselves and something which is beyond logic or language ?

Can we develop the theory that a certain super-logical, super-linguistic, non-logical and non-linguistic consciousness and cognition is necessary and useful ? We have sketched a theory of analyticity based on computability, and from this perspective the super-logical can be seen as unfolded in the hierarchy of degrees of hypercomputability. A logical pluralism which yet has nothing conventional or arbitrary about it.

Consciousness, experience, cognition, life... these are (at least potentially) infinitely more vast than abstract conceptual 'thought' in the ordinary sense of the word. We need to see thought as a multilayered structure and process, part of a larger enveloping and grounding structure and process...

Also: I do not see any weighty argument against my own contention that the central problem of the philosophy of logic is simply: what is an argument, in particular what is a so-called 'valid' or 'persuasive' argument ? What is a sophistical argument (or a sophistical worldview) ? 

Tuesday, February 18, 2025

Mathematical theory of habit

It does not seem, at first glance, easy to express the concept of habit mathematically, that is, to express the concept within a mathematical general-systems framework. Habit is a very difficult notion to grasp, as is the way causality enters into it. Imagine a system S which evolves temporally through a state space Q. Consider two "small" disjoint regions A and B in Q. Then we can observe the situation X in which the system is in a state in A at a given time T1 and then in B at a subsequent time T2 (where the interval [T1,T2] must be smaller than some bound W), passing through a neighbourhood U of a certain path in Q linking A to B. Now in the evolution of the system S we may observe that behaviour X becomes more common and frequent as time goes on, as if S develops the "habit" of X. Is the development of habit X a special case of X being an attractor in the dynamical-systems sense ? Could this evolution of the system S be specified by a causal law ? Not an infinitesimal law but a finite-interval (probabilistic) law: if the system is in a state in A then its evolution over an interval bounded by W is determined by the most frequent path starting at A in the past. Its present behaviour in situation A is determined by the highest unanimity of the processes of previous A-cases. In a physical sense the repetition of X can be seen as exerting a causal influence on the future, a kind of morphogenetic field. We can also conceive a hierarchical organization in which a behaviour X is decomposable into smaller behaviours X1,...,Xn all subject to the law of habit. There is a kind of self-reference involved, a weak kind of self-determining behaviour.
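The finite-interval habit law above can be illustrated with a toy simulation (everything here - the three states, the uniform exploration term, the reinforcement scheme - is an illustrative assumption, not from the text): the system chooses its next state with probability proportional to how often each transition out of the present state occurred in its own past, so frequent transitions reinforce themselves.

```python
import random
from collections import Counter

def habit_step(state, history, states, rng, bias=1.0):
    """Choose the next state: frequencies of past transitions out of
    `state` act as weights (the 'law of habit'), plus a uniform
    exploration term `bias` so unseen transitions stay possible."""
    counts = Counter(nxt for (s, nxt) in history if s == state)
    weights = [bias + counts[s2] for s2 in states]
    return rng.choices(states, weights=weights)[0]

def run(steps=2000, seed=0):
    """Evolve the system, recording every transition in its history."""
    rng = random.Random(seed)
    states = ["A", "B", "C"]
    history = []
    s = "A"
    for _ in range(steps):
        nxt = habit_step(s, history, states, rng)
        history.append((s, nxt))
        s = nxt
    return Counter(history)

counts = run()
most_common = counts.most_common(1)[0]  # the most ingrained transition
```

Because past frequency feeds back into future choice, this behaves like a Pólya-urn scheme: early random fluctuations become entrenched "habits".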

Now things get interesting if we consider the system as spatialized, so that a state is assigned not merely to a given time but also to a region (point) in space. We can then postulate that the frequency of behaviour X at a location L1 causally influences the frequency (at a later time) of the same behaviour at a distinct location L2. Thus we distinguish between self-inductive habit and propagating habit (cf. solutions to certain PDEs expressed via Green's functions or integral operators). This propagating aspect (perhaps not instantaneous) can be compared, for instance, to the creation of electromagnetic radiation by periodically moving charges. Thus local behaviour is determined not only by the past habit of that region but by the sum-total of the habits of other regions. This is of course a profoundly holistic or "holomorphic" situation.

Friday, January 17, 2025

Hume, the most misunderstood philosopher

We grant that the Treatise may not be an entirely consistent work and that its precise aim may still be quite unclear. But this does not erase the fact that Hume has suffered historically from being appropriated, perverted and misrepresented by subsequent generations. Hume has had only a few serious or quasi-genuine readers, such as Kant, T. H. Green, Brentano, Meinong, Husserl and Whitehead.

The problem with Hume is that he does not seem to be able to make up his mind if he is engaging in a radical philosophy in the style of Descartes or in a rational and experimental psychology.

The philosophy of Hume is radically incompatible with subsequent naturalism, so-called empiricism or logical positivism.

The philosophy of Hume is not compatible with the kind of relativism or skepticism exemplified by Sextus Empiricus (whom Hume most certainly read).  On the contrary Hume values highly evidence and rigorous proof.  Consider this beautifully embarrassing passage from part II of section II (Book I):

But here we may observe, that nothing can be more absurd, than this custom of calling a difficulty what pretends to be a demonstration, and endeavouring by that means to elude its force and evidence. It is not in demonstrations as in probabilities, that difficulties can take place, and one argument counter-ballance another, and diminish its authority. A demonstration, if just, admits of no opposite difficulty; and if not just, it is a mere sophism, and consequently can never be a difficulty. It is either irresistible, or has no manner of force. To talk therefore of objections and replies, and ballancing of arguments in such a question as this, is to confess, either that human reason is nothing but a play of words, or that the person himself, who talks so, has not a Capacity equal to such subjects. Demonstrations may be difficult to be comprehended, because of abstractedness of the subject; but can never have such difficulties as will weaken their authority, when once they are comprehended. 

Here Hume is no Pyrrhonist. However, at the end of Book I, in the part where he touches upon his personal psychological problems (or rather, on the practical aspect of philosophy) and literary ambitions, it also seems that Hume is confessing Pyrrhonism, or rather a kind of Carneadian scepticism closer to the later Academy (such as was not uncommon in his time): cf. the motifs of living according to nature, being guided by veridical appearance, etc. (Husserl also agrees with this in the chapters dedicated to Hume in the Krisis). We have already dealt with a refutation of such a stance. And indeed appealing to plausibility or probabilities is fallacious, for it leaves open the question of the degree of plausibility of inferences and judgments about plausibility. Hence long enough chains of probabilistic reasoning with only probable rules will end up improbable as a whole. Hume's take on the classical argument against scepticism in part I of section IV might be construed as the definitive statement of how Hume juggled these contradictions: the arguments of the Treatise are meant Pyrrhonically, like the arguments in Sextus's Outlines, as temporary means to dethrone reason, both the attacker and the attacked ultimately becoming weaker by degrees. According to this reading Hume is just an updated, empiricist-flavoured version of Sextus - with the important difference that Hume, as we saw above, has no liking for amphibolisms. However part II, which appeals to 'nature' as a surrogate for reason to determine the existence of bodies, marks perhaps the lowest point in the Treatise (while again echoing Carneades and Sextus). Hume's effort to make his scepticism consistent can only come at the expense of a naturalist dogmatism which renders his whole enterprise self-defeating.

Hume was forced to admit that there is a process of abstraction applied even to the most elementary, simple, indecomposable impressions such as coloured points. Hume uses in various passages the expression 'under a certain light'.

Suppose that in the extended object, or composition of coloured points, from which we first received the idea of extension, the points were of a purple colour; it follows, that in every repetition of that idea we would not only place the points in the same order with respect to each other, but also bestow on them that precise colour, with which alone we are acquainted. But afterwards having experience of the other colours of violet, green, red, white, black, and of all the different compositions of these, and finding a resemblance in the disposition of coloured points, of which they are composed, we omit the peculiarities of colour, as far as possible, and found an abstract idea merely on that disposition of points, or manner of appearance, in which they agree. Nay even when the resemblance is carryed beyond the objects of one sense, and the impressions of touch are found to be Similar to those of sight in the disposition of their parts; this does not hinder the abstract idea from representing both, upon account of their resemblance. All abstract ideas are really nothing but particular ones, considered in a certain light; but being annexed to general terms, they are able to represent a vast variety, and to comprehend objects, which, as they are alike in some particulars, are in others vastly wide of each other.  (part III, section II)

Finally, Hume has given us one of the most beautiful expressions of subjective idealism in the famous passage (end of section II):

We may observe, that it is universally allowed by philosophers, and is besides pretty obvious of itself, that nothing is ever really present with the mind but its perceptions or impressions and ideas, and that external objects become known to us only by those perceptions they occasion. To hate, to love, to think, to feel, to see; all this is nothing but to perceive. Now since nothing is ever present to the mind but perceptions, and since all ideas are derived from something antecedently present to the mind; it follows, that it is impossible for us so much as to conceive or form an idea of any thing specifically different from ideas and impressions. Let us fix our attention out of ourselves as much as possible: Let us chase our imagination to the heavens, or to the utmost limits of the universe; we never really advance a step beyond ourselves, nor can conceive any kind of existence, but those perceptions, which have appeared in that narrow compass. This is the universe of the imagination, nor have we any idea but what is there produced. 

Hume's treatment of the self seems to be a distorted version of the Abhidhamma theory of anatta: cf. The Possibility of Oriental Influence in Hume's Philosophy. This is interesting because Buddhist affinities have been argued both for Pyrrhonism and for Hume.

See also Gaston Berger's Hume et Husserl.

Thursday, January 16, 2025

Projects

1. Extended Second-Order Logic as a general logic for philosophy (and the generalized epsilon calculus as well as connection to type theory and linear logic). An important aspect of ESOL is that it provides the technical framework for (intensional) anti-extensionalist and anti-inferentialist theories in the philosophy of logic, something which is important for anti-functionalist arguments.

2.  Universal phenomenology. This involves the synthesis of the great currents of classical philosophy: Pyrrhonism, Stoicism and (Neo)platonism. And the integration between the ancient and the modern (esp. Descartes, Hume, Kant, Hegel, Frege, Brentano), East and West. For Eastern philosophy we focus on the original philosophy of the Pali Nikayas as well as Yogâcâra and Vedânta.

The guiding idea is the possibility for consciousness to step outside itself and become integrally and clearly aware of itself: transcendental self-transparency. This can also function as a powerful psychotherapy and a path of self-development and self-improvement.

We propose that it makes sense to speak of a phenomenological method in Hegel (though this must be defined and explained carefully, for instance how it differs from Husserl's or Hume's method) and that much of Hegel's Logic can be interpreted as a phenomenological analysis of classical logical, epistemological and metaphysical (as well as scientific) concepts (mainly in their Kantian presentation) - 'common' notions that we all use and know but which we have not inquired into with all possible clarity and depth. The phenomenological method becomes ultimately self-conscious, self-referential and self-encompassing in its goal, essence and process. For instance, consider Hegel's analysis of teleology in the section on the Object in the Encyclopedia Logic: this is clearly a phenomenological analysis. Hegel's phenomenological method includes a kind of advanced systems theory in which consciousness and self-reference (as well as self-modification and self-production) play a key role.

In the words of Hegel himself in the Encyclopedia Logic:

In other words, every man, when he thinks and considers his thoughts, will discover by the experience of his consciousness that they possess the character of universality as well as the other aspects of thought to be afterwards enumerated. We assume of course that his powers of attention and abstraction have undergone a previous training, enabling him to observe correctly the evidence of his consciousness and his conceptions.

The great illusion of modern phenomenology is that ordinary consciousness is somehow self-transparent from a first-person perspective, or that such self-transparency can be obtained by ordinary philosophical reflection or study (though the situation varies according to individual disposition and talent). Rather, it is necessary for ordinary consciousness to step entirely outside itself in order to know itself purely and objectively; only then is phenomenology possible. This is the deeper significance and value of an anti-psychologism such as Frege's.

Ordinary consciousness is based on forgetfulness of its a priori conditioning and determining factors, for instance temporality. But we must go deeper and inquire into what is even more forgotten: the 'self', the 'knower' and the 'agent'.

Works like Aristotle's De Anima and several key treatises of Plotinus can be seen as establishing the foundations of an authentic phenomenology. The Vedanta school as well as the rival but intimately connected Yogacara school likewise.

We view the above foundations as important elements in anti-physicalist and anti-functionalist arguments.

3. On causality, computability and the mathematical models of nature, including Hegel and Modern Topology. This is also relevant to anti-Quinean arguments in the philosophy of science.

4. Biology from an abstract point of view: take standard material from textbooks and reformulate it from a very abstract mathematical point of view to lay bare conceptual symmetries, connections and new theoretical perspectives.

5. Study the historical traditions and engage in an active defense of an ethics founded on universal human and animal rights and universal compassion. Promote critical awareness of unquestioned social values and assumptions regarding procreation.

Schopenhauer had at once great merit and great weakness. His version of Kantian idealism is very poor stuff and his criticism of his contemporaries does not involve any serious engagement. However this does not affect the profound insights of some fundamental aspects of his animal rights and compassion based ethics (quite compatible, in fact, with Kant) together with his interesting Platonic theory of aesthetics and art and a kind of Goethean biology.

(and continue paper on Analyticity and the A Priori)

Wednesday, January 8, 2025

Brentano's phenomenological idealism

Moreover, inner perception is not merely the only kind of perception which is immediately evident; it is really the only perception in the strict sense of the word. As we have seen, the phenomena of the so-called external perception cannot be proved true and real even by means of indirect demonstration. For this reason, anyone who in good faith has taken them for what they seem to be is being misled by the manner in which the phenomena are connected. Therefore, strictly speaking, so-called external perception is not perception. Mental phenomena, therefore, may be described as the only phenomena of which perception in the strict sense of the word is possible.

It is not correct, therefore, to say that the assumption that there exists a physical phenomenon outside the mind which is just as real as those which we find intentionally in us, implies a contradiction. It is only that, when we compare one with the other we discover conflicts which clearly show that no real existence corresponds to the intentional existence in this case. And even if this applies only to the realm of our own experience, we will nevertheless make no mistake if in general we deny to physical phenomena any existence other than intentional existence.

Franz Brentano, Psychology from an Empirical Point of View (1874)

Systems theory

To construct a model of reality we must consider what are to be considered the basic elements. Postulating such elements is necessary even if they are seen as provisory or only approximative, to be analysed in terms of a more refined set of basic elements.  A very general scheme for models involves distinguishing between time T and the possible states of reality S at a given time t. T is the set of possible moments of time. Thus our model is concerned with the Cartesian product S×T. In modern physics we would require a more complex scheme in which T would be associated with a particular observer. It is our task to decompose or express elements of S in terms of a set of basic elements E and to use such a decomposition to study their temporal evolution.

The most general aspect of T is that it is endowed with an order of temporal precedence which is transitive. We may leave open the question whether T with this order is linear (as in the usual model of the real numbers) or branching. The most fundamental question regarding T concerns its density properties. Is time ultimately discrete (as might be suggested by quantum theory), or is it dense (between two instants we can always find a third), or does it satisfy some other property (such as the standard ordering of ordinals in set theory) ? The way we answer this question has profound consequences for our concept of determinism.

For a discrete time T we have a computational concept of determinism which we call strong determinism. Let t be a given instant of time and t′ the moment immediately after t. Then given the state s of the universe at time t we should be able to compute the state s′ at time t′. If this transition function (the state transition function) is not computable, we may still have determinism regarding certain properties of s′; this we call weak determinism. Stochastic models also offer a weak form of determinism, although a rigorous formalization of this may be quite involved. A very weak statement of determinism would simply postulate the non-branching nature of T.

We can also consider a determinism which involves not the state in the previous time but the entire past history of states and having an algorithm which determines not only the next state but the states for a fixed number of subsequent moments. For instance the procedure would analyze the past history and determine which short patterns most frequently occurred and then yield as output one of these which the system would then repeat as if by "habit".
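The procedure just described - analyze the past history, find the most frequent short pattern following the present context, and repeat it "by habit" - can be sketched as follows (the pattern lengths and the encoding of states as integers are illustrative assumptions):

```python
from collections import Counter

def habit_predict(history, k=3, horizon=4):
    """Predict the next `horizon` states as the most frequent
    length-`horizon` continuation of the current length-`k` suffix
    in the past history: determinism that depends on the entire
    past, not merely the present state."""
    suffix = tuple(history[-k:])
    continuations = Counter()
    for i in range(len(history) - k - horizon + 1):
        if tuple(history[i:i + k]) == suffix:
            continuations[tuple(history[i + k:i + k + horizon])] += 1
    if not continuations:
        return None  # no precedent: no habit to fall back on
    return list(continuations.most_common(1)[0][0])

hist = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]  # an ingrained "habit"
print(habit_predict(hist, k=3, horizon=4))   # → [0, 1, 2, 0]
```

Note that the postulate of memory mentioned below would collapse this dependence on the whole history into the present state alone.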

The postulate of memory says that all the necessary information about the past history is somehow codified in the state of the system at the previous time. For a dense time T it is more difficult to elaborate a formal concept of determinism. In this case strong determinism is formulated as follows: given a time t, a state s of the universe at t, and a t′ > t which is in some sense sufficiently close to t, we can compute the state s′ at t′. Models based on the real numbers, such as the various types of differential equations, are problematic in two ways. First, obtaining strong determinism, even locally, is problematic and will depend on having solutions given by convergent power series expansions with computable coefficients or on numerical approximation methods. Secondly, differential models are clearly only continuum-based approximations (idealisations) of more complex real systems having many aspects which are actually discrete. The determinism of differential models can thus be seen as based on an approximation of an approximation.

We now consider the states of the universe S. The most basic distinction that can be made is that between a substrate E and a space of qualities Q. There is also an alternative approach, such as that of Takahara et al., based on the black-box model, in which for each system we consider the Cartesian product X×Y of inputs X and outputs Y. In that model we are led to derive the concept of internal state as well as that of the combination of various different systems. We can easily represent this scenario in our model by simulating the input and output signalling mechanism associated to a certain subset of E. States of the universe are given by functions ϕ : E×T → Q. We will see later that it is in fact quite natural to replace such a function by the more general mathematical structure of a "functor". To understand ϕ we must consider the two fundamental alternatives for E: the Lagrangian and Eulerian approaches (terms borrowed from fluid mechanics).

In the Lagrangian approach the elements of E represent different entities and beings, whilst in the Eulerian approach they represent different regions of space or of some medium - such as mental or semantic space. These can be, for instance, points or small regions in standard Euclidean space. The difficulty with the Lagrangian approach is that our choice of the individual entities depends on the context and scale, and in any case we have to deal with the problem of beings merging or becoming connected, coming to be or disappearing, or the indiscernibility problem in quantum field theory. The Eulerian approach, besides being more natural for physics, is also very convenient in biochemistry and cellular biology, where we wish to keep track of individual biomolecules or cells or nuclei of the brain. In computer science the Lagrangian approach could be seen in taking as basic elements the objects in an object-oriented programming language, while the Eulerian approach would consider the variation in time of the contents of a specific memory array.

We call the elements of E cells and ϕ : E×T → Q the state function. For now we do not say anything about the nature of Q. In the Eulerian approach E is endowed with a fundamental bordering or adjacency relation ∼ which is not reflexive, that is, a cell is not adjacent to itself. The only axioms we postulate are that ∼ is symmetric and that each cell has at least one adjacent cell. Thus ∼ induces a graph structure on E. This graph may or may not be planar, spatial or embeddable in n-dimensional space for some n.

We can impose a condition making E locally homogeneous in such a way that each e ∈ E has the same number of uniquely identified neighbours. For the case of discrete T, the condition of local causality states that if we are in a deterministic scenario and at time t we have a cell e with ϕ(e,t) = q, then the procedure for determining ϕ(e,t′) at the next instant t′ needs only the information regarding the values of ϕ for e and its adjacent cells at the previous instant. Many variations of this definition are possible, in which adjacent cells of adjacent cells may also be included. This axiom is seen clearly in the methods of numerical integration of partial differential equations.

Now suppose that T is discrete, that E is locally homogeneous, and that we indicate the neighbours of a cell e by e_1, e_2, ..., e_i. Then the condition of homogeneous local causality can be expressed as follows. For any time t and cells e and e′ such that ϕ(e,t) = ϕ(e′,t) and ϕ(f_j,t) = ϕ(f′_j,t) for all j, where f_j and f′_j are the corresponding neighbours of e and e′, we have that ϕ(e,t′) = ϕ(e′,t′), where t′ is the instant after t.

An example in the conditions of the above definition is that of a symbol propagating along a direction j. If a cell e is in state on and the cell e′ which is the j-th neighbour of e is in state off, then at the next instant e is in state off and e′ is in state on. Stochastic processes such as diffusion can also easily be expressed in our model.
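A minimal sketch of this propagating-symbol example, assuming a one-dimensional ring of cells in which direction j is "to the right": each cell's next state depends only on its own state and those of its adjacent cells, so the update is homogeneously locally causal.

```python
def step(config):
    """One tick of a locally causal update on a 1-D ring of cells:
    a cell is on at the next instant iff its left neighbour is on
    now, so a single on-symbol propagates one cell rightward per
    tick (the same rule applied uniformly at every cell)."""
    n = len(config)
    return [config[(i - 1) % n] for i in range(n)]

config = [1, 0, 0, 0, 0]   # a single symbol at cell 0
for _ in range(3):
    config = step(config)
print(config)              # the symbol has moved three cells right
```

The rule uses only the fixed, uniquely identified neighbourhood of each cell, which is exactly the locality condition exploited by numerical integration schemes for PDEs.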

A major problem in the Eulerian approach is to define the notion of identity of a complex being: for instance, how biological structures persist in their identity despite the constant flux and exchange of matter, energy and information with their environment.

We clearly must have a nested hierarchy of levels of abstraction and levels of approximation, and this calls for a theory of approximation. Some kind of metric and topology on E, T and the space of state functions ϕ is necessary. Note that all the previous concepts carry over directly to the Lagrangian approach as well. In this approach a major problem involves formalising the way in which cells can combine with each other to form more complex beings. If we consider the example of biochemistry, we see that complex beings made up from many cells have to be treated as units as well, and that they will have their own quality space Q′ which will contain elements that cannot be realised by a single e ∈ E. This suggests that we need to add a new relation on E to account for the joining and combination of cells and to generalise the definition of ϕ : E×T → Q.

We take the Lagrangian approach and add a junction relation J on E. When eJe′, then e and e′ are to be seen as forming an irreducible being whose state cannot be decomposed in terms of the states of e and e′. The state transition function must take into account not only all the neighbours of a cell e but all the cells that are joined to any of these neighbours.

Let J* be the transitive closure of J. Let E_J denote the set of subsets S of E such that if e, e′ ∈ S then e J* e′. Inclusion induces a partial order on E_J. Instead of a single Q we consider a set 𝒬 of different quality spaces Q, Q′, Q″, ... which represent the states of the different possible combinations of cells; let Q represent, as previously, the states of single cells. For instance a combination of three cells will have states which will not be found in a combination of two cells or in a single cell. Suppose e is joined to e′ and the conglomerate has state q ∈ Q′. Then we can consider e and e′ individually, and there is a function which restricts q to states q1 and q2 of e and e′. In category theory there is an elegant way to combine all this information: the notion of presheaf. To define the state functions for a given time t we must consider a presheaf:

Φ_J : E_J^op → 𝒬

The state of the universe at a given instant will be given by compatible sections of this presheaf. To define this we need the category of elements El(𝒬) associated to 𝒬, whose objects are pairs (Q, a) with a ∈ Q, and whose morphisms f : (Q, a) → (Q′, a′) are maps f : Q → Q′ preserving the second component, f(a) = a′. Thus a state function at a given time is given by a functor:

ϕ_J : E_J → El(𝒬)

But J can vary in time, and we need a state transition function for J itself, which will clearly also depend on ϕ_J at the previous moment. Thus the transition will involve a functor:

𝒥 : hom(E_J, El(𝒬)) → Rel(E)

and will yield a functor:

ϕ_{𝒥(ϕ_J)} : E_{𝒥(ϕ_J)} → El(𝒬)

Note that we could also consider a functor:

𝐄 : Rel(E) → Pos

which associates E_J to each J.
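As a toy illustration of the presheaf idea (the cell names, the quality spaces and the restriction convention are all invented for this sketch, not taken from the text), one can assign a quality space to the joined pair {e1, e2} and to each singleton, with restriction maps acting contravariantly along inclusions:

```python
# Objects of a tiny fragment of E_J: the joined pair and its
# singleton sub-conglomerates, each with its own quality space.
spaces = {
    frozenset({"e1", "e2"}): {("on", "on"), ("on", "off"),
                              ("off", "on"), ("off", "off")},
    frozenset({"e1"}): {"on", "off"},
    frozenset({"e2"}): {"on", "off"},
}

def restrict(sub, sup, q):
    """Restriction map along the inclusion sub ⊆ sup: project the
    joint state q of `sup` (a tuple indexed by sorted cell name)
    onto the cells of `sub`. Contravariance: the map runs from the
    state of the larger conglomerate to that of the smaller."""
    order = sorted(sup)
    return tuple(q[order.index(e)] for e in sorted(sub))

pair = frozenset({"e1", "e2"})
q = ("on", "off")                          # joint state: e1 on, e2 off
q1 = restrict(frozenset({"e1"}), pair, q)  # state of e1 alone
q2 = restrict(frozenset({"e2"}), pair, q)  # state of e2 alone
```

A compatible section would then be a choice of state for every conglomerate agreeing under all such restrictions; truly irreducible joint states (ones with no faithful restriction) are what make the larger quality spaces strictly richer.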

The relation J is the basic form of junction. We can use it to define higher-level complex concepts of connectivity such as that which  connects various regions of biological systems. We might define living systems as those systems that are essentially connected. These can be defined as systems in which the removal of any part results necessarily in the loss of some connection between two other parts. This can be given an abstract graph-theoretic formulation which poses interesting non-trivial questions. Finally we believe this model can be an adequate framework to study self-replicating systems.
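The graph-theoretic formulation of "essentially connected" can be prototyped directly (a sketch using plain adjacency dictionaries; the representation is an assumption): a system is essentially connected when removing any single cell disconnects some pair of the remaining cells, i.e. every vertex is a cut vertex.

```python
def is_connected(adj, removed=frozenset()):
    """Depth-first connectivity check on the graph minus `removed`."""
    nodes = [v for v in adj if v not in removed]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def essentially_connected(adj):
    """True iff removing ANY single cell disconnects the rest."""
    return all(not is_connected(adj, frozenset({v})) for v in adj)

path = {0: [1], 1: [0, 2], 2: [1]}          # removing an endpoint is harmless
cycle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # removing any vertex is harmless
```

The examples hint at one of the non-trivial questions alluded to: a standard result says every connected simple graph with at least two vertices has at least two non-cut vertices, so essentially connected systems in this strict sense would require richer structure than a plain graph (for instance the junction relation J itself, or hypergraph-like connectivity).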

Sunday, December 22, 2024

Some topics in the philosophy of nature

The relationship between the concepts of determinism, predetermination, computability, cardinality, causality and the foundations of the calculus. To study this we need a mathematical general systems theory, hopefully general enough for this investigation. 

It is clear that 'determinism' is a very complex and ambiguous term and that it has only been given a rigorous sense in the case of systems equivalent to Turing machines, which are a case of finite or countably infinite systems. Note that there are finite or countably infinite systems which are not computable and hence not deterministic in the ordinary sense of this term. Thus this sense of determinism implies computability, which in turn implies that to determine the evolution of the system we need to consider only a finite amount of information involving present or past states. And we should ask how the even more complex concept of 'causality' comes in here. What are we to make of the concept of causality defined in terms of such computable determinism ? Note that a system can be considered deterministic in a metaphysical sense without being in fact computable.

A fundamental problem is understanding the role of differential (and integral) equations in natural science and the philosophy of nature. The key aspects here are: being an uncountable model, and expressing causality in a way distinct from the computational deterministic model above. Note the paradox: on one hand, 'numerical methods' are discrete, computable, deterministic approximations of differential models. On the other hand, the differential models used in science are themselves clearly obtained as approximations and idealizations of nature, for instance in the use of the Navier-Stokes equations, which discard the molecular structure of fluids.
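The "approximation of an approximation" point can be made concrete with the simplest numerical method (the equation dx/dt = -x and the step count are illustrative choices): explicit Euler replaces the continuum model by a discrete, computable, strongly deterministic transition function, which in turn only approximates the differential model.

```python
import math

def euler(f, x0, t_end, n):
    """Explicit Euler integration: a discrete, computable state
    transition x -> x + h*f(x) approximating dx/dt = f(x)."""
    h = t_end / n
    x = x0
    for _ in range(n):
        x = x + h * f(x)
    return x

# dx/dt = -x with x(0) = 1; the exact continuum solution at t = 1
# is exp(-1). The discrete model only approximates it.
approx = euler(lambda x: -x, 1.0, 1.0, 1000)
exact = math.exp(-1.0)
```

Refining the step size shrinks the discrepancy but no finite discretization recovers the continuum model exactly - just as the continuum model itself idealizes a substrate with discrete aspects.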

One problem is to understand the causality and determinism expressed in differential models in terms of non-standard paradigms of computation beyond the Turing limit. One kind of hypercomputational system can be defined as carrying out a countably infinite number of computational steps in a finite time.

For a mathematical general systems theory we have considered two fundamental kinds of systems: these are transpositions to generalized cellular automata/neural networks  of the Eulerian and Lagrangian approaches  to fluid mechanics.  It is clearly of interest to consider non-countable and hypercomputational versions of such general cellular automata: to be able to express differential models in a different way and to generalize them by discarding the condition of topological locality (already found in integral-differential equations and the convolution operation, Green's function, etc.).

The deep unsolved problems regarding the continuum are involved here as well as their intimate connection to the concepts of determinism, causality, computability and the possibility of applying differential models to nature. 

A special case of this problem involves a deeper understanding of all the categories of functions deployed in modern analysis: continuous, smooth, with compact support, of bounded variation, analytic, semi- and sub-analytic, measurable, L^p, tempered distributions, etc. How can 'determinism' and even computability be envisioned in models based on these categories?

What if nature were ultimately merely measurable rather than continuous ? That is, what if the temporal evolution of the states of a system, modeled as a function ϕ: T → S, must involve some kind of merely measurable map ϕ ? Our only 'causality' or 'determinism' must then involve generalized derivatives in the sense of distributions. And yet the system can still be deterministic in the metaphysical sense and even hypercomputational in some relevant sense. Or maybe such maps are generated by sections of underlying deterministic continuous processes ? 

General determinism and weak causality involve postulating properties of the evolution of a system which may not be logically or computationally sufficient to predict that evolution in practice. This is similar to the situation in which, given a recursive axiomatic-deductive system, we cannot know in practice whether a given sentence can be derived or not. Also, constructions like the generalized derivative of locally integrable functions involve discarding much information.
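The last point can be made concrete with a small numerical sketch (the bump test function and grid are our own choices). The weak derivative of the Heaviside step H is defined by the pairing ⟨H′, φ⟩ = −⟨H, φ′⟩ against smooth test functions φ; numerically this pairing recovers φ(0), i.e. H′ is the delta distribution, and it is blind to any change of H on a set of measure zero.

```python
import numpy as np

# The generalized (weak) derivative keeps only integral information:
# pairing the Heaviside step H against the derivative of a smooth bump
# phi returns phi(0), i.e. H' = delta in the sense of distributions.
x = np.linspace(-1.0, 1.0, 200_001)
with np.errstate(divide="ignore"):
    # standard bump function, smooth and supported in (-1, 1)
    phi = np.where(np.abs(x) < 1, np.exp(-1.0 / (1.0 - x**2)), 0.0)
dphi = np.gradient(phi, x)          # phi'
H = (x >= 0).astype(float)          # Heaviside step
dx = x[1] - x[0]
pairing = -np.sum(H * dphi) * dx    # -<H, phi'>
print(pairing)                      # ~ phi(0) = exp(-1) = 0.3679
```

Modifying H at finitely many points leaves `pairing` unchanged up to grid resolution, which is exactly the information-discarding the text describes.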

For quantum theory: actual position and momentum are given by non-continuous measurable functions over space-time (we leave open the question of particle or wave representations). The non-continuity implies non-locality which renders, perhaps, the so-called 'uncertainty principle' more intelligible. The wave-function ψ is already a kind of distribution or approximation containing probabilistic information. Quantum theory is flawed because the actual system contains more information than is embodied in the typical wave-function model - a situation analogous to the way in which the generalized derivative involves discarding information about the function.

Uncertainty, indeterminism and non-computability are thus a reflection not of nature itself but of our tools and model-theoretic assumptions. In the same way it may well be that it is not logic or mathematics that are 'incomplete' or 'undecidable' but only a certain paradigm or tool-set that we happen to choose to employ.

Another topic: the study of nature involves hierarchies of models which express different degrees and modes of approximation or ontological idealization, yet which must be ordered in a coherent way. Clearly the indeterminism or problems of a given model at a given level arise precisely from this situation: small discrepancies at a lower level which have been swept under the rug can in the long run have drastic repercussions on higher-level models, even if most of the time they can be considered negligible. And we must be prepared to envision the possibility that such hierarchies are imposed by the nature of our rationality itself as well as by experimental conditions - and that the levels may be infinite.

Computation, proof, determinism, causality - these are all connected to temporality, to the topology and linear order of time, and a major problem involves the uncountable nature of this continuum.

In mathematical physics we generally have an at least continuous function from an interval of time into Euclidean space, configuration space or a space-time manifold; this serves to describe a particle or system of particles. More generally we have fields (sections of finite-dimensional bundles) defined on such a space which are in general at least continuous, often locally smooth or analytic.  This can be generalized to distributions, to fields of operators or even operator-valued distributions.  But what if we considered, at a fundamental level, movements and fields which were merely measurable and not continuous (or only section-wise continuous) ? Measurable and yet still deterministic. Does this even make sense ? At first glance 'physics' would no longer make sense, as there would no longer be any locality or differential laws. But there could still be a distributional version of physics, a version of physics built over integrals. Suppose the motion of a particle is now a merely measurable (or locally integrable) function ϕ: T → ℝ³, and consider a free particle. In classical physics, if we know the position and momentum at a given time then we know the position (and momentum) at any other time (uniform linear motion). But there is no canonical choice for a non-continuous function. Given a measurable function f: T → ℝ³ we can integrate and define a probability density ρ: ℝ³ → ℝ₊ which determines how frequently the graph of f intersects a small neighbourhood of a point x. But what are we to make of a temporally evolving ρ (we could consider a rapid time, at the Planck scale, and a slow time) ?

Tentative definition of the density function:

ρ_f(x) = lim_{U → x} μ(f⁻¹(U)) / m(U)

where μ is a Borel measure on T, m is the Lebesgue measure on ℝ³, and the limit is taken over open neighbourhoods U shrinking to x. Question: given a continuous function g: ℝⁿ → K, where K is either the real or complex numbers, and a (signed) Borel measure μ on T ⊆ ℝ, is there a canonical measurable non-continuous function f: T → ℝⁿ such that ρ_f = g ? It would seem not: any choice among the possible 'random' candidates implies extra information.  And we need to make sense of this question for continuous families g_t of continuous functions, for example g_t = e^{iπt}. The differential laws of the g_t might then need to be seen as finite approximations.
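The tentative density above can be estimated numerically (a sketch under our own assumptions: μ is taken as the normalized counting measure on N sampled instants, and the 'merely measurable' trajectory is a stand-in that jumps to an independent uniform point at every instant).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a nowhere-continuous trajectory f: T -> R: at each of
# N sampled instants the particle jumps to an independent uniform
# position in [0, 1].
N = 100_000
f = rng.uniform(0.0, 1.0, N)

def density(f, x, h):
    """Estimate rho_f(x) = mu(f^-1(U)) / m(U) with U = (x - h, x + h)."""
    mu_preimage = np.mean((f > x - h) & (f < x + h))   # mu(f^-1 U)
    return mu_preimage / (2.0 * h)                     # divide by m(U)

# A uniformly jumping free particle has a flat density (~1 on [0, 1]).
print([round(density(f, x, 0.05), 2) for x in (0.25, 0.5, 0.75)])
```

The flat density is the numerical face of the point above: many distinct 'random' trajectories yield the same ρ_f, so no canonical f can be recovered from g.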

Task: define the notion of a 'real computable process'. 

Another approach: we have a measurable map f: T → A × B. Suppose that we know only f_A(t₀) and not f_B(t₀), while knowledge of both would be theoretically enough to compute f(t) for t > t₀.  Then given U ⊆ A × B we can take the measure of the set V ⊆ B such that if f_B(t₀) ∈ V then f(t) ∈ U.  

If a trajectory is measurable and not continuous, does velocity or momentum even make sense ? 

For f: T → ℝ³ measurable (representing the free movement of a single particle) we can define, for each interval I ⊆ T, ρ_I(x) = lim_{U → x} μ(f⁻¹(U) ∩ I) / m(U), which can be thought of as a generalized momentum, but one in which causality and temporal order are left behind. Thus we could assign to each open interval I ⊆ T a density function ρ_I: ℝ³ → K. We can then postulate that the variation of ρ_I with I is continuous, in the sense that given an ε we can find a δ such that for any partition {Iᵢ} of T with d(Iᵢ) < δ we have ||ρ_{Iᵢ₊₁} − ρ_{Iᵢ}|| < ε for some suitable norm.
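A numerical sketch of these interval densities ρ_I (the trajectory, the bins and the choice of the L¹ norm are our own illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# A nowhere-continuous trajectory whose interval densities drift
# continuously: at time t the particle jumps to a uniform point in
# (t - 0.5, t + 0.5), so rho_I changes slowly as I slides forward.
N = 200_000
t = np.linspace(0.0, 1.0, N)
f = t + rng.uniform(-0.5, 0.5, N)

bins = np.linspace(-0.6, 1.6, 45)
width = bins[1] - bins[0]

def rho(samples):
    """Histogram estimate of the density over one time interval I."""
    counts, _ = np.histogram(samples, bins=bins, density=True)
    return counts

# rho_I over ten consecutive intervals I_1, ..., I_10 of equal length;
# the L^1 gap between neighbouring densities stays small.
parts = np.array_split(f, 10)
rhos = [rho(p) for p in parts]
gaps = [np.sum(np.abs(a - b)) * width for a, b in zip(rhos, rhos[1:])]
print(max(gaps))
```

Refining the partition shrinks the gaps further, which is exactly the ε-δ continuity postulated for the family ρ_I.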

This construction can be repeated if we consider hidden state variables for the particle, that is f: T → ℝ³ × H for some state-space H. Of course we cannot in practice measure H at a given instant of time for a given particle.  Note also that if we have two measurable maps then indiscernibility follows immediately - individuation is tied to continuity of trajectories.

Space-time is like a fluctuating ether which induces a Brownian-like motion of particles - except not continuous at all, only measurable. Maybe it is H that is responsible for directing the particle (a vortex in the ether) and making it behave classically in the sense of densities.  

A density function (for a small time interval) moving like a wave makes little physical sense. Why would the particle jump about in its merely measurable trajectory and yet have such a smooth deterministic density function ? It is tempting to interpret the density function as manifesting some kind of potential - like a pilot wave.

The heat equation ∂u/∂t = k ∂²u/∂x² represents a kind of evening out of a function u: valleys (where ∂²u/∂x² > 0) are raised and hills (where ∂²u/∂x² < 0) are lowered. But heat is a stochastic process. Maybe this provides a clue to understanding the above - except that in our case there is only one very rapid particle playing the role of all the molecules. 
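A minimal finite-difference sketch of this evening out (grid, diffusivity and step count are our own choices):

```python
import numpy as np

# Explicit scheme for u_t = k u_xx: each step replaces u(x) by a
# weighted average with its neighbours, so hills (u_xx < 0) are
# lowered and valleys (u_xx > 0) are raised.
k, dx, dt = 1.0, 0.1, 0.004           # k*dt/dx^2 = 0.4 < 0.5 (stable)
x = np.arange(0.0, 1.0 + dx / 2, dx)
u = np.sin(np.pi * x)                  # one hill, endpoints fixed at 0
peak0 = u.max()

for _ in range(100):
    lap = np.zeros_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]   # discrete u_xx
    u = u + (k * dt / dx**2) * lap

print(peak0, u.max())                  # the hill has been lowered
```

The stability bound k·dt/dx² < 1/2 is what keeps the averaging interpretation valid: each new value is a convex combination of old neighbouring values.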

Another approach: given a continuous function on a region ϕ: U → ℝ₊, construct a nowhere continuous function τ: T → U such that ϕ is the density of τ in T. This is the atomized field. The Schrödinger equation is then an approximation, just as the Navier-Stokes equations ignore the molecular structure of fluids.

Newton's first law of motion expresses minimality and simplicity for the behaviour of a free particle.  We can say likewise that a free atomized field is completely random, spread out uniformly in a region of space - as yet without momentum. Momentum corresponds to a potential which directs and influences the previously truly free atomized field.  Our view is that a genuinely free particle or atomized field is one in which the particle has equal probability of being anywhere (i.e. it has no cohesion; any cohesion must be the effect of a cause). Thus Newton's free particle is not really free but a particle under the influence of a directed momentum field. There are rapid processes which create both the atomized field (particle) and the field.

Why should we consider a gravitational field as being induced by a single mass when in reality it only manifests when there are at least two ? 

In Physics there are PDEs which are derived from ODEs of physics at a more fundamental level and there are PDEs that are already irreducibly fundamental.

A fundamental problem in philosophy: the existence of non-well-posed problems (in PDEs), even with smooth initial conditions.  This manifests not so much the collapse of the differential model of determinism as the essentially approximative nature of PDE modelling. Philosophically, the numerical approximation methods and the PDEs might be placed on equal footing: they are both approximations of reality.  Even the simplest potential (the gravitational field of a point particle) must have a discontinuity.

Weak solutions of PDEs are in general not unique. Goodbye determinism. The problem with mathematical physics is that it lacks an ontology beyond the simplest kind. It is applied to locally homogeneous settings - or to systems which can be merged together in a straightforward way.  It lacks a serious theory of individuality and interaction - something seen in the phenomenon of shock waves. 

The above considerations on quantum fields are of course very simple and we should address rather the interpretation of quantum fields as (linear) operator valued distributions (over space-time) (see David Tong's lectures on QFT). This involves studying the meaning of distributions and the meaning of  (linear) operator fields - and of course demanding perfect conceptual and mathematical rigour with no brushing infinities under the carpet. And consider how a Lagrangian is defined for these fields involving their derivatives and possibly higher powers (coupling, self-interaction, "particle" creation and annihilation). What does it even mean to assign to each point an operator on a Hilbert space (Fock space) ? How can this be interpreted according to the approach above ?

But we have not even touched upon some of the fundamental elements of physics: Lagrangian densities (why the restrictions on their dependencies ?), the locality of the Lagrangian,  the principle of least action, Noether's theorem, Lorentz invariance,  the energy-momentum tensor. But we consider the distinction between scalar and vector fields to be of the utmost mathematical and philosophical significance. 

And what are some points regarding classical quantum field theory ? The interplay between the Heisenberg and Schrödinger pictures in perturbation theory. That our 'fields' are now our canonical coordinates seen as fields of operators. That we now have a whole calculus of operator-valued functions (for example a_p e^{ipx}, where p is a momentum vector and a_p the corresponding creation operator): PDEs,  integral solutions, Green functions, propagators, amplitudes via scattering matrices, etc.  That the field itself is now beyond space and time - it is not a function of space and time, but recalls rather the Alayavijnana in Yogâcâra philosophy; physics is an excitation via such operator fields of this primordial field (and we will not enter here into a discussion of zero energy and the Casimir effect). 

How do we deploy our approach then to elementary QFT ? Perhaps consider a merely measurable field M → A (not necessarily continuous) in which M is space-time and A is some structure over which it is possible to define topologies and σ-algebras and to do calculus.

The structure that replaces the real or complex field in the operator calculus might be seen most naturally as a C*-algebra.  But operators act on a Hilbert space, so we need to consider that our C*-algebras also have representations on a given inner product space F. Thus a field takes space-time points to representations of C*-algebras on a space F. Amplitudes ⟨0|ϕ(x)ϕ(y)|0⟩ (here |0⟩ represents the ground state, not the null element of a vector space) are obtained by applying the represented operators to |0⟩ and then taking the inner product with |0⟩. More generally this gives amplitudes for certain transitions. The actual state space for a quantum field (with its excitations) is completely non-localized. But could this be given a topological interpretation (without having to go global), for instance as the (co)homology of a bundle ? 

Addendum to our paper Hegel and modern topology:  for physics and systems theory the most developed and relevant sections of the Logic are found in the section on Object (mechanism, chemism and teleology) and the first parts of the section on Idea (with regard to process and life). For relevance to topology and geometry this is a good point of entry, with special focus on self-similarity (maybe in quantum field theory ?) and goal-oriented systems. The final parts of Idea are clearly about self-reference, self-knowledge, self-reproduction and self-production, and clearly culminate in a form of abstract theology.  Concept-Idea is both essentially self-referential (it is knowledge knowing itself, knowledge that is being and being that is knowledge, and also process and goal) and self-productive as well as generative, in the sense that it generates or emanates nature. 

What might be the deeper philosophical significance of the delta function in physics, the fact that its Fourier transform (in the sense of Fourier transforms of distributions) is a constant function ? It seems to have something to do with the correspondence between space and time.
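A discrete check of this fact, using the FFT as the finite analogue of the Fourier transform:

```python
import numpy as np

# Discrete analogue of the remark above: the Fourier transform of the
# delta is the constant function. A unit impulse concentrated at a
# single sample spreads with equal weight over every frequency --
# perfect localization in one domain, none in the other.
delta = np.zeros(8)
delta[0] = 1.0
spectrum = np.fft.fft(delta)
print(spectrum.real)    # eight entries, all equal to 1
```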

The following is certainly very relevant to our discussion on determinism, causality and differential vs. measurable models.

1. Can thermodynamics be deduced mathematically from a more fundamental physical theory ?

2. Could we consistently posit a fifth force in nature which manifests in a decrease in entropy ?

The problem here is that many ordered states can be conceived as evolving as usual into the same more highly disordered state.  This can even be approached by attempting to give an underlying deterministic account (as in the kinetic theory of gases).  Thus thermodynamics just gives general dynamical systems results that apply to the specific systems of nature.
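The point that many ordered states evolve toward the same disordered state, while admitting an underlying deterministic or stochastic account, can be illustrated with the standard Ehrenfest two-urn model (our choice of toy model, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

# Ehrenfest two-urn model: at each step one particle, chosen uniformly
# at random, switches urns. Starting from the fully ordered state (all
# particles in urn A), the coarse-grained entropy of the A/B split
# almost surely rises toward its maximum of one bit.
n = 1000
in_a = n                                # ordered start: all in urn A

def split_entropy(k, n):
    """Shannon entropy (bits) of the coarse-grained split (k, n-k)."""
    p = np.array([k / n, 1.0 - k / n])
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

before = split_entropy(in_a, n)
for _ in range(5000):
    if rng.random() < in_a / n:         # a particle from A jumps to B
        in_a -= 1
    else:                               # a particle from B jumps to A
        in_a += 1
after = split_entropy(in_a, n)
print(before, after)                    # 0.0 -> close to 1.0
```

Every microscopic step here is simple and reversible in form, yet the coarse-grained entropy climbs: the increase belongs to the description level, not to any special force.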

But if a new force manifested in the sense of decreasing entropy, then a reconciliation with determinism would be more problematic: from a single chaotic state there is a multitude of ordered states into which it could (apparently, at least) consistently evolve. Thus there seems to be some kind of choice, a freedom, a process akin to the collapse of the wave function.

Perhaps in nature there is a conservation law involving entropy and anti-entropy.  Life is a manifestation of the anti-entropy which balances out the usual entropy increase in physics.

Like consciousness, causality, determinism, proof, number and computation - entropy is intimately connected to the direction and linear flow of time. 

The big question is: are there finitary computational or differential deterministic processes which have entropy-decreasing behavior (do they evolve complex behaviour, self-reference, etc.) ?  We would say that this seems indeed not to be the case. Thus we need to move on to: infinitary (hyper) computational systems and to deterministic but not necessarily continuous systems. There is indeed a connection between the problems of the origin and nature of life and Gödel's incompleteness theorems and the problems in quantum field theory.

Differential models and their numerical approximation methods are some of the most successful and audacious applications and apparent extensions of our finitary computable paradigm. But they cannot explain or encompass everything. 

If we postulated that space and time were discrete, then there could be no uniform linear motion, for instance moving two space units in three time units. At the second time unit the particle could equally claim to be in the first or the second space cell - hence a situation of uncertainty.  For more complex movements the uncertainty distribution can be greater. 
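The two-cells-in-three-ticks example can be made explicit (a toy enumeration; taking floor and ceiling as the two candidate roundings of the ideal position is our own assumption):

```python
import math
from fractions import Fraction

# Uniform motion of two space cells per three time ticks on a discrete
# grid: the ideal position at tick t is 2t/3, which generally falls
# between cells. The set of candidate cells at each tick makes the
# positional ambiguity explicit.
cells_at = {}
for t in range(4):
    ideal = Fraction(2 * t, 3)
    cells_at[t] = sorted({math.floor(ideal), math.ceil(ideal)})

print(cells_at)
# {0: [0], 1: [0, 1], 2: [1, 2], 3: [2]}
```

At ticks 1 and 2 the particle has two equally good cells, matching the uncertainty described above; only at ticks 0 and 3 is its cell determinate.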

The Hydrogen atom: examine carefully the methods of attaining the solutions of the Schrödinger equation in this instance and see if the solutions (involving spherical harmonics) can be given other physical interpretations (of the electron 'clouds') along the lines of our proposal.

What we need to do: elaborate the mathematical theory of how a free "quantum" particle (i.e. a particle with a completely discontinuous random trajectory in space) comes under the influence of a classical potential.  

Since we cannot write a differential equation for the non-continuous trajectory, our determinism must be defined by a differential equation on the probability density (as explained above).  Take a potential and the Laplace equation. Physically, if the 'particle' is influenced by the potential then the totally dispersed 'cloud' will be attracted and reshaped according to the solutions of the equation.

We don't know what meaning is

Gödel, criticizing a paper by Turing, remarked on how 'concepts'  are grasped by the mind in different ways, that certain concepts c...