Sunday, March 30, 2025

Three metaphilosophies

We have proposed three metaphilosophies. Phenomenological metaphilosophy involves understanding the timeless and universal principles of the phenomenological method and program which are found across a great variety of different philosophical systems, times and places. Formal metaphilosophy takes a highly skeptical view of common philosophical practice, with a focus on its logical and linguistic aspects, and proposes a methodology based on axiomatic-deductive systems and the rigorous definition of all concepts involved in all philosophical arguments and debates. Critical metaphilosophy (inspired by Frege, the early Husserl, Gödel, Gellner (backed by Russell), Mundle, Preston, Findlay, A. Wierzbicka, J. Fodor, C. Ortiz Hill, Rosado Haddock and Unger, John W. Cook and also some considerations of Marcuse) questions the value of much of 20th and 21st century philosophy from a predominantly logical and linguistic point of view, as well as paying great attention to the presupposed or insinuated materialist hypotheses found therein (Preston and Unger should have engaged in exhibiting substantial textual evidence for 'scientiphicalism'). Every accusation is a confession, and if linguistic philosophy/ordinary language philosophy is patently bad philosophy, the kind of condemnations it engaged in towards previous philosophy prophetically turned out to apply remarkably well to itself. And indeed this linguistic philosophy never ceased to be a powerful underlying force until today, despite its various disguises and apparently sanitized versions, including analytic metaphysics. A true scientific linguistics deployed in a critical metaphilosophical way is what is called for - the study of the psychology, sociology and linguistics of professional philosophy/sophistry. Even in non-orthodox philosophers in this tradition (Robert Hanna, George Bealer) we find a strong presence of many of its assumptions and rhetorical-argumentative patterns.

We do not lose sight of the hard problems and limitations involved in both phenomenological and formal metaphilosophy.

Why use the term phenomenology rather than psychology or introspective psychology ? For the greatest of problems involves what is most primordially given. And how can truth be found or based on anything but this ? The goal of philosophy is to see fully, to know fully; it is self-transparency and liberation. Locke, Berkeley and Hume dealt with the deepest, most fundamental and most fertile of all questions. They looked in the right direction and had the right perspective. The great question: what is a concept ? Without concepts there is no logic, no language, no reason, no knowledge. Can we admit knowledge without any conceptuality ? Or mind without conceptuality ? Certainly ordinary knowledge involves concepts. Even in asking about knowledge and truth, are we not asking about concepts ? Is not truth a concept ? Is not knowledge a concept ? And do we have a concept of a concept, even if it is an unclear, vague, definition-lacking concept ?

And what about ethics, especially an ethics based on compassion ? Schopenhauer, as we mentioned before, offers us a purely phenomenological ethics based on compassion, which thus would appear to have a non-conceptual anchor.

If philosophy is foremost a quest for individual clarity and knowledge regarding one's own consciousness, how can we express the truth we find to others ?  How can we argue, how can we persuade ? What are the rules which must govern or direct this argument or persuasion ? There are no arguments without concepts. If we do not know what a concept is, we do not know what an argument is. We have a concept of concept yet this concept is not an adequate concept. We can know things and yet not know how to define them. Sentences express concepts (they can be nominalized) just as adjectives, adverbs, nouns, verbs, pronouns, etc.  And concepts are not vague. It is difficult to find two different words (from hundreds of thousands) which have exactly the same meaning. If meaning boundaries were fluid we would not expect this to happen.

Without concepts there is no language. It is erroneous and foolish to go about theories of language and profess to talk about the mind without first venturing into the vast realm of the philosophy of concepts. 

Our stream of consciousness is not a stream of sensations or recollected images of simple sensations but includes a stream of concepts (in-consciousness concepts, not Fregean concepts obviously).

As a temporary remedy for this state of affairs we propose formal philosophy, carrying out philosophical arguments in an entirely mathematical fashion.

Husserl's Logical Investigations is a great textbook in philosophy, a kind of summa of the best psychological-introspective, logical, linguistic and ontological work of the 18th and 19th centuries. Likewise Frege is a model of clarity and elegance - regardless of one's views.

The danger of philosophical introspective (and transformative) psychology is that of turning into mere psychotherapy or psychoanalysis, or of becoming uncritically influenced by occultism and religion. Equally harmful are naturalism, neuro-reductionism, behaviorism and the dogmas of 'linguistic philosophy' or 'ordinary language philosophy'. Speech acts and language games are still abstracted, isolated, analyzed and understood conceptually.

The dilemma here seems to be between staying safely at the periphery or venturing to where lurks the great danger of religion, occultism and cults. Philosophy is indeed a psychotherapy which aims heroically to overcome the deeply ingrained conditioning of religion and materialism alike (cf. Gödel's statement: religion for the masses, materialism for the intellectuals). There are no royal roads or shortcuts in philosophy. See this essay by Tragesser and van Atten on Gödel, Brouwer and the Common Core thesis. Gödel's theory, as recounted by the authors, is of utmost significance. Gödel was promoting the restoration of the authentic meaning of Plato's dialectics and the role of mathematics expounded in the Republic and other texts. Perhaps Gödel has pointed out the best path (at once philosophical and self-developmental) (for so-called "Western man") which avoids the double pitfall of materialism and religion/psychotherapy/occultism. In the 21st century (inheriting from the 20th century) we are inundated by the cult of the irrational, by anti-rationalism in every conceivable and subtle and insidious form. The "rational" is only allowed to thrive in its most miserable, limited and adulterated form, harnessed and enslaved to the lowest materialistic/technological/economic/military goals. And the technological and economic goals here do not even aim at the common good and the equal and fair distribution of the earth's resources.

And here is what is remarkable about the Platonic-Gödelian method: the confluence between pure mathematical thought and introspective transformative philosophical psychology. But this project can be discerned in Husserl's Logical Investigations, and Claire Ortiz Hill has written extensively about the objective, formal and logical aspect of this work, in particular the important connection to Hilbert's lesser known philosophical thought. However the psychological and phenomenological aspect is just as important, just not in the way of the later Husserl, rather in the Platonic-Gödelian and transformative philosophical psychological way.

The epokhê as Husserl outlined it is not possible (and even less is the Heideggerian alternative valid); rather, such clarity and 'transcendental experience' is possible through the Platonic-Gödelian method.

George Bealer (1944-2025)

 https://dailynous.com/2025/01/22/george-bealer-1944-2025/

Friday, March 28, 2025

Fundamental problem in the philosophy of logic

The fundamental problem in the philosophy of logic is understanding the nature and meaning of formal logic, that is,  so-called mathematical or symbolic logic.

The key notion involved is that of self-representation and self-reflection.

We have informal but rigorous proofs concerning abstract axiomatic systems. Then we have abstract axiomatic systems representing reasoning and proof concerned with abstract axiomatic systems. But then we must prove that a given structure is a proof of a proposition in the same way we prove a proposition in the object axiomatic system. And we require an abstract axiomatic system to reason about proofs in the deductive system - or to prove soundness and consistency.  But how do we prove that what we informally can prove we can also formally prove ?

In order to carry out deductions we must have the concepts of rule and what it means to apply a rule correctly. Likewise we must have the concepts of game and goal. The concept of rule is tied to logic and computability. 

The concept of game includes counting, computing and reasoning.

Kant's question: how is pure mathematics possible ? should not have gone the way of synthetic a priori intuitions but rather to the question: how is formal mathematical proof possible ? That is, how would Leibniz's characteristica be possible ?

Hilbert's treatment of geometry vs. Kant.

Another problem involves the countability of linguistic expressions vs. the possible uncountability of objects.  It follows that there are uncountably many indefinable objects which hence cannot be uniquely identified. Any property they have they must share with other such objects.

We find the term 'sociologism' very apt to describe the 'linguistic turn' (meaning-as-use, inferentialism) of Wittgenstein, Ryle, Austin and its continuation in Sellars, Brandom, etc. There is a strict parallelism with the earlier psychologism. It is likewise untenable. It is part of the physicalist assault against the mind, consciousness, individually accessible knowledge and truth (for example a priori moral, logical and mathematical truth) and moral conscience and freedom. It is a pseudo-scepticism and pseudo-relativism/conventionalism and is ultimately nonsensical. It is reductionism (borrowed from neuroreductionism and functionalism) and is circular. While sociology is a legitimate scientific discipline, sociologism is not based on science and is bad philosophy.

The idea that meaning of the term 'and' can be given by exhibiting a rule does not appear to be very cogent.

A: What does 'and' mean ?
B: That's simple. IF you postulate a sentence A as being true *AND* a sentence B as being true THEN you can postulate that the sentence "A and B" is true (and vice-versa).
A: I asked for you to define 'and' and you gave me an explanation that uses 'and', 'if...then', 'being true' and the concept of judgment. Sorry, that just won't do ! 

It is also obvious that B may be inferable from A but that a person who accepts A is not sociologically obliged in any way to state or defend B - for example, Fermat's last theorem before its proof by Wiles. Any adequate language for fully describing the full range of sociological behavior, norms and practices is at least Turing complete. So appeals to sociology cannot be used to furnish foundations for either logic or language.

Sociologism stands Frege on his head. It is a transposition to the social plane of the false dogma of functionalism and behaviourism.

Given a sentence S we can consider the recursively enumerable (but not recursive) set I(S) of all sentences which can be inferred from S in a system T. Clearly I(S) cannot count as the meaning of S. Elementary number theory abounds in statements involving only elementary concepts whose truth and inferability are not known.
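As a toy illustration (not any particular system T - the two rewriting rules below are made up purely for the example), the recursive enumerability of I(S) can be sketched as a breadth-first closure. The enumeration is entirely mechanical, yet nothing in it yields a decision procedure for membership in general:

```python
from collections import deque

def consequences(s, max_items=20):
    """Breadth-first enumeration of a toy inference closure I(S).

    Two illustrative rules: from X infer '(X & X)' and from X infer '~~X'.
    The set is recursively enumerable (we can list it), but listing it
    gives no decision procedure for membership in the general case.
    """
    seen = {s}
    queue = deque([s])
    out = []
    while queue and len(out) < max_items:
        x = queue.popleft()
        out.append(x)
        for y in (f"({x} & {x})", f"~~{x}"):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return out

theorems = consequences("p")
print(theorems[:4])  # enumeration begins: p, (p & p), ~~p, ...
```

To decide whether a given sentence is *not* in I(S), this enumeration never suffices: it only ever confirms membership.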

Recommended reading: C. W. Mundle - A Critique of Linguistic Philosophy (Oxford, 1970).

Another strand of linguistic philosophy which seeks to undermine the certainty, clarity, objectivity and apriority of knowledge has roots in the later Wittgenstein's theories of polymorphism and his assault on definitions and meanings (but see the discussion in the Theaetetus). In its current form it revolves around what we call 'the cult of vagueness'.

The cult of vagueness attempts to undermine the clarity, precision and non-ambiguity of language, and most importantly the language of philosophy, ethics, psychology - not to mention logic, mathematics and science. Two of its sources are the 'paradoxes' and obvious peculiarities of certain natural language elements, especially the more homely and down-to-earth terms like 'bald' and 'cup' - there is nothing strange about certain adjectives having a threefold decomposition. Of course to do this it has to assume a certain doctrine about language and its relation to the mind and the world.

The meaning of a property can be crystal clear and yet the application of the property can be difficult and uncertain. And it is only uncertain because the meaning is clear.

The cult of vagueness has its own peculiar rhetorical style which involves never stating one's assumptions clearly but only insinuating them.  

Erroneous theory of 'semantic relations' including 'speech acts' like 'whispering'. What do they mean by act (an old Aristotelian metaphysical concept) ? And whispering is a quality of speech, not a semantic relation. For instance 'Mary whispered the nonsense spell she read in the book' has no semantic component.

Anna Wierzbicka's distinction between folk and scientific concepts demolishes the cult of vagueness. Our low-level concepts do not have definitions in the technical sense; they have stories. They are also dynamic and socio-specific. Thus it is a category mistake to concoct arguments which ignore this distinction.

Linguistics depends on psychology and the philosophy of mind but these last depend on language.

Most adjectives and many nouns are not analogous to mathematical properties such as 'prime number'. Negation functions differently. Often the adjectival property has a tripartite structure, for instance 'tall', 'short' and 'medium height'. Thus if somebody is not tall it does not mean they are short. These folk concepts (admitting a fair range of adjectival and adverbial degree modifiers) can give place to scientific ones, which generally will involve a scale, a measure. Temperature is measured by different instruments. There is a limit of precision and there are variations across measurements by different instruments or by the same instrument at different times. But this does not make the concept of temperature vague or ambiguous. In fact statistical concepts are not vague even if as properties they cannot describe the state of a system in a unique way.

We can transpose Gödel's arguments to Zalta's Object Logic. Instead of numerical coding of formulas we use the encoding relation for properties and objects. We can thus define predicates for an object encoding only a certain property, only a certain sentence, or only a proof of a certain sentence, Proof(p,a), where p is to be seen as encoding a sequence of sentences. Then we can define Diag(a,b) iff a encodes the proposition Bb, where b encodes only the property B. Then we can construct the Gödel sentence by taking the property G = λz.¬∃x∃y(Proof(x,y) & Diag(z,y)), which is encoded by g, and forming the sentence Gg.
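Written out a little more explicitly (a sketch in the paragraph's own notation, not Zalta's official formalism), the diagonal construction is:

```latex
% Proof(p,a): p encodes a sequence of sentences constituting a proof of the
% sentence encoded by a.
% Diag(a,b): a encodes the proposition Bb, where b encodes only the property B.
G \;=\; \lambda z.\, \neg \exists x\, \exists y\, \bigl( \mathrm{Proof}(x,y) \wedge \mathrm{Diag}(z,y) \bigr)
% Letting g be an object encoding only the property G, the Goedel sentence is
Gg \;\leftrightarrow\; \neg \exists x\, \exists y\, \bigl( \mathrm{Proof}(x,y) \wedge \mathrm{Diag}(g,y) \bigr)
```

So Gg "says of itself", via the encoding relation, that it has no proof.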

Consider a reference relation between expressions and objects. Suppose that there were uncountably infinitely many objects.  Then:

i) either there are objects which cannot be referred to by any definite description

ii) or there are objects which share all their properties with infinitely many other objects (indiscernibility)

Or consider infinitely many objects with one binary relation: there are uncountably many possible states of affairs which thus cannot be referred to in a unique way. The same argument applies. And of course there are arguments involving categoricity.
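The counting behind (i) and (ii) can be made explicit; this is a standard cardinality sketch, assuming expressions are finite strings over a countable alphabet:

```latex
% Expressions are finite strings over a countable alphabet, hence countable:
|\mathrm{Expr}| \;\le\; \aleph_0 \;<\; 2^{\aleph_0} \;\le\; |\mathrm{Obj}|
% Each definite description refers to at most one object, so the uniquely
% describable objects form a set of cardinality at most \aleph_0; uncountably
% many objects remain without any unique description, yielding (i) or (ii).
```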

"Speech acts", the vagueness of ordinary terms...this is already found in Husserl's Logical Investigation (see for instance vol II, Book I). And previously in Benno Erdmann. 

Meaning and psychology: the great question.  Consciousness is so much more than the lower sphere of (mainly audio-visual) fantasy and imagination processes.  When we think of the concept of prime number or the concept of 'meaningless sentence'...and of course there is the Fregean view.

Multiplicity of psychological experience in the meaning phenomenon. But we can abstract a type, a species of what is invariable. Husserl is led from here to ideal objects à la Frege, the space of pure meanings. But in the first Logical Investigation Husserl discusses how the psychological contents of abstract expressions are very poor, fluctuating and even totally non-existent, and hence cannot be identified with meanings. Husserl also mentions the hypothesis of a rich subconscious psychological content being involved. What is going on really when we think of "prime number" ? Do we have a subconscious web of experience reaching back to when we first learnt the concept ? And could not all this ultimately correspond to a kind of formal rule such as: if a divides p then a is 1 or p, or if a is not 1 or p then a does not divide p ? There is nothing social here, or only in the most vague and general way. An extended and rectified Hilbertian view can perhaps be seen as depth phenomenology, especially in light of modern formal mathematics projects.

Apriority and certainty, as well as intersubjective agreement - all this depends on recursion theory and arithmetic or its 'deep logic'. Logos is a web of relations which is not relative.

Meinong's Hume Studies: Part I: Meinong's Nominalism

Meinong's Hume Studies: Part II. Meinong's Analysis of Relations

The deep meaning of Gödel's incompleteness theorem is the mutual inclusion of the triad: logic, arithmetic and recursion theory.

Computability, determinism and analyticity

An overlooked but nevertheless very important problem concerns the role of the differentiable and smooth categories in mathematical physics, that is, the categories of maps having continuous derivatives up to a certain order or of all orders. Our question is: why use such a class of maps rather than (real) analytic ones (or semi-analytic) ? The equations of physics have analytic coefficients. Known solutions are analytic (for simplicity we do not distinguish analytic from meromorphic). In fact known solutions are analytic, having power series representations with computable coefficients (for a standard notion of computability for sequences of real numbers). And in fact all numerical methods for mathematical physics depend on working in the domain of computable analytic functions.

The class of computable analytic functions is related to the problem of integration of elementary functions. It is also very elegant and simple in itself as it reduces the problems of the foundation of analysis (infinitesimals, non-standard analysis) to the algebra of  (convergent) power series.

Deep results in the theory of smooth maps depend on the theory of several complex variables.

The existence and domain of analytic solutions to analytic equations is an interesting and difficult area of mathematics. Several results rule out associating analytic equations with any kind of global determinism (in the terminology of Poincaré, solutions in power series diverge). That is, if we wish to equate determinism with computability and thus with computable analytic functions in physics. Thus the fact that computable determinism is essentially local is not philosophy but a hard result in mathematics.

An interesting mathematical question: are there analytic equations which (locally) admit smooth but not analytic solutions ?

Another vexing question: why are there not abundantly more applications of the theory of functions of several complex variables (and complex analytic geometry) to mathematical physics ?

There are objections against the analytic class. For instance it rules out the test functions used in distribution theory, or more generally functions with compact support. Thus we cannot represent a completely localized field or soliton wave (but notice how Newton's law of gravitation posits that a single mass will influence the totality of space). And yet the most general functions constructed (like the test function) are often simply the result of gluing together analytic functions along a certain boundary. Most concrete examples of smooth but not analytic functions are precisely of this sort. We could call these piecewise analytic maps. Thus additional arguments are required to justify why we have to go beyond piecewise analytic maps. An obvious objection would be: distributions and weak solutions. But here again we can invoke the theory of hyperfunctions. It seems plausible that there could be a piecewise analytic version of distribution theory (using sheaf cohomology) - even a computable piecewise analytic version.

In another note we investigate other incarnations of computability in mathematical physics (and their possible role in interpreting quantum theory). Can we consider measurable but not continuous maps which are yet computable ? Together with obvious examples of locally or almost-everywhere (except on a computable analytic set) computable analytic maps we can seek examples of nowhere continuous measurable maps which are yet computable (in some adequate sense). The philosophy behind this is that computable determinism may go beyond the differential and analytic categories, the equations of physics in this case only expressing (in a non-exhaustively determining way) measure-theoretic properties of the solutions.


We end with a discussion of what exactly constitutes a computable real analytic function and how we can define the most interesting and natural classes of such functions. Obvious examples are the so-called 'elementary functions', which have very simple coefficient series in their Taylor expansions. It is also clearly interesting to study real analytic functions whose coefficient series are computable in terms of n. And can we decide mechanically when one of these functions is elementary ?
Consider the class of real elementary functions defined on a real interval I. These are real analytic functions. How can we characterise their power series ? That is, what can we say about the series of their coefficients ? For instance there are coefficients a_n given by rational functions in n, or given by combinations of rational functions and factorials, primitive recursive coefficients, coefficients given by recurrence relations, etc. It is easy to give an example of a real analytic function which is not elementary. Just solve the equation x′′ − tx = 0 using power series. This equation is known not to have any non-trivial elementary solution; in fact it has no Liouville solution (indefinite integrals of elementary functions).
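A minimal sketch of this example: matching coefficients of t^n in x′′ − tx = 0 gives the recurrence (n+2)(n+1)a_{n+2} = a_{n−1} together with a_2 = 0, so the coefficient sequence is computable (indeed primitive recursive) even though the solution, an Airy function, is not elementary. The truncation length and initial conditions below are illustrative choices:

```python
from math import isclose

def airy_coefficients(n_terms, a0=1.0, a1=0.0):
    """Power-series coefficients for x'' - t*x = 0 (the Airy equation).

    Matching coefficients of t^n gives (n+2)(n+1) a_{n+2} = a_{n-1}
    for n >= 1, together with a_2 = 0.  The recurrence is primitive
    recursive, so the coefficient sequence is computable.
    """
    a = [0.0] * n_terms
    a[0], a[1], a[2] = a0, a1, 0.0
    for n in range(1, n_terms - 2):
        a[n + 2] = a[n - 1] / ((n + 2) * (n + 1))
    return a

def eval_series(a, t):
    return sum(c * t**k for k, c in enumerate(a))

a = airy_coefficients(30)
t = 0.5
x = eval_series(a, t)
# Second derivative of the truncated series, evaluated term by term.
xpp = sum(k * (k - 1) * c * t**(k - 2) for k, c in enumerate(a) if k >= 2)
print(isclose(xpp, t * x, rel_tol=1e-9))  # the truncation satisfies x'' = t*x up to tiny tail terms
```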
Let ELEM be the problem: given a convergent Taylor series, does it represent an elementary function ? Let INT be the problem: given an elementary function, does it have a primitive/indefinite integral which is an elementary function ? An observation that can be made is that if ELEM is decidable then so too is INT. Given an elementary function, write down its Taylor series and integrate each term; then apply the decision procedure for ELEM (of course we must be more precise here, this is just the general idea). Thus to show that ELEM is undecidable it suffices to show that INT is.
In the literature there is defined the class of holonomic functions, which can be characterised either as:

1) Being solutions of a homogeneous linear differential equation with polynomial coefficients.

2) Having Taylor series coefficients given by polynomial recurrence relations.

There is an algorithm to pass between these two presentations. The holonomic class includes the elementary functions, the hypergeometric functions, the Bessel functions, etc. The question naturally arises: given a sequence of real numbers, is it decidable whether they obey a polynomial recurrence relation ?
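The second characterisation can at least be verified mechanically on finitely many terms. The sketch below checks a candidate polynomial recurrence against a given coefficient sequence (deciding whether an arbitrary sequence is holonomic at all is of course a different and harder matter, which the question above raises):

```python
from math import factorial

def satisfies_recurrence(seq, polys):
    """Check that a sequence obeys sum_j polys[j](n) * seq[n+j] == 0
    for every n where the window fits.  polys[j] maps n to the j-th
    polynomial coefficient of the candidate recurrence."""
    order = len(polys) - 1
    return all(
        abs(sum(polys[j](n) * seq[n + j] for j in range(order + 1))) < 1e-9
        for n in range(len(seq) - order)
    )

# exp(t): the Taylor coefficients a_n = 1/n! satisfy (n+1) a_{n+1} - a_n = 0.
exp_coeffs = [1.0 / factorial(n) for n in range(15)]
print(satisfies_recurrence(exp_coeffs, [lambda n: -1, lambda n: n + 1]))  # True
```

This only confirms a given recurrence on a finite window; it furnishes no decision procedure for the existence of some polynomial recurrence.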

Tuesday, March 25, 2025

Additions to 'Hegel and modern topology'

The something and the other. In the Science of Logic Hegel thinks of the idea of two things bearing the same relation to each other and indistinguishable in any other way. Now a good illustration of this situation is found in the two possible orientations of a vector space. Each bears the same relation to the other and there is absolutely no way to uniquely identify one of them in distinction to the other. In the same place Hegel talks about the meaning of proper names and states that they have none !

Hegel's strategy. All of Hegel is based on the dynamics and structure of consciousness as it reveals itself to itself. But equally important it is all based on positing this structure of consciousness  to be essentially objective and not subjective.  Thus 'being a subject' is seen as a stage in the development of the object while mere  'subjectivity' is seen as a failure and partiality of objective consciousness in not living up to its full objectivity or actuality (or knowledge and realization thereof).

Thus ideality and infinity are the key structure of an object which has developed to a point of being a subject. 

Finitude and limit:  a closed set.  Yet the boundary also defines what the set is not (cf. discussion on Heyting algebra of open sets, etc.). Thus the boundary contains implicitly its own negation.

Limitation : an open set (also analytic continuation). Limit outside itself.

The ought... manifestation of the germ of a space through a given open set representation of the equivalence class. Any particular one is insufficient and can be replaced by another which is also insufficient.

False infinity: taking simply the set (or diagram) of such open set representatives. True infinity: taking the limit (equivalence class), or limit in the categorical sense.

Degrees of interpenetration and transparency between the individual and the universal (and between self-consciousness and essence) in the Phenomenology of Spirit.  From the rudimentary form in the ethical life to: the universal is in the individual, the individual in the universal and the universal is in the relation between individuals and the relation between individuals is in the universal (mutual confession and forgiveness).

Tuesday, March 4, 2025

A central problem of philosophy

To us a central problem of philosophy is to elucidate the relationship between the following three domains of (apparent) reality/experience:

1. logic and language

2. mind and consciousness

3. an objective or external world 

While the relationship between 2 and 3 is a classical topic which has produced an immense literature, the deeper problem seems to be the relationship between 1 and 2 and between 1 and 3.

My question is: how can my anti-inferentialism and anti-anti-representationalism and anti-functionalism be expressed in terms of such relationships ? 

A preliminary and useful question: what were logic and language for Leibniz, Kant, Fichte, Hegel and Schopenhauer ? What were logic and language for Frege, Brentano and Husserl ?

Why was anti-psychologism not accompanied by a corresponding anti-physicalism ?

Another question: how does one's view of the relationship between 2 and 3 condition one's view of the relations 1-2 and 1-3 ? For instance, is the physicalist or idealist somehow conditioned in (or by) their views on logic and language ?

How are we to understand the theory that logic and language are precisely aspects of the self-interaction (self-reflection)  of a universal (super-individual) consciousness when to understand any process we must presuppose logic and language ?

Logic, on the contrary, cannot presuppose any of these forms of reflection or rules and laws of thinking, for these constitute part of its own content and have first to be grounded within it. But not only the statement of scientific method, the very concept of science as such belongs to its content, and indeed constitutes its final result; what logic is, it therefore cannot say in advance; rather, its whole treatment first brings forth this knowledge of itself as its final result and as its completion. (Hegel)

How can we apply logic and language to determine the relationship between logic and language themselves and something which is beyond logic or language ?

Can we develop the theory that a certain super-logical, super-linguistic, non-logical and non-linguistic consciousness and cognition is necessary and useful ? We have sketched a theory of analyticity based on computability, and from this perspective the super-logical can be seen as unfolded in the hierarchy of degrees of hyper-computability. A logical pluralism which yet has nothing conventional or arbitrary about it.

Consciousness, experience, cognition, life...these are (at least potentially) infinitely more vast than abstract conceptual 'thought' in the ordinary sense of the word. We need to see thought as a multilayered structure and process part of a larger enveloping and grounding structure and process...

Also: I do not see any weighty argument against my own contention that the central problem of the philosophy of logic is simply: what is an argument, in particular what is a so-called 'valid' or 'persuasive' argument ? What is a sophistical argument (or a sophistical worldview) ? 

Tuesday, February 18, 2025

Mathematical theory of habit

It does not seem, at first glance, easy to express the concept of habit mathematically, that is, to express the concept within a mathematical general systems framework. Habit is a very difficult notion to grasp, as is the way causality enters into it. Imagine a system S which temporally evolves through a state space Q. Consider two "small" disjoint regions A and B in Q. Then we can observe the situation X in which the system is in a state in A at a given time T1 and then in B at a subsequent time T2 (where the interval [T1,T2] must be smaller than some bound W), passing through a neighbourhood U of a certain path in Q linking A to B. Now in the evolution of the system S we may observe that behaviour X becomes more common and frequent as time goes along, as if S develops the "habit" X. Is the developing of habit X a special case of X being an attractor in the dynamical-systems sense ? Could this evolution of system S be specified by a causal law ? Not an infinitesimal law but a finite-interval (probabilistic) law: if the system is in a state in A then its evolution in an interval with bound W is determined by the most frequent path starting at A in the past. Its present behavior in situation A is determined by the highest unanimity of the processes of previous A-cases. In a physical sense the repetition of X can be seen as exerting a causal influence on the future, a kind of morphogenetic field. We can also conceive a hierarchical organization of a behavior X decomposable into smaller behaviors X1,...,Xn, all subject to the law of habit. There is a kind of self-reference involved, a weak kind of self-determining behavior.
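One hedged way to sketch this reinforcement law is a Pólya-urn scheme: the path taken from A to B is chosen with probability proportional to how often it was taken before, so early repetitions entrench themselves and a dominant "habit" tends to emerge. The number of paths and the initial counts below are illustrative assumptions, not part of the model described above:

```python
import random

def habit_simulation(steps, paths=3, seed=0):
    """Polya-urn sketch of habit: each passage from A to B picks one of
    several paths with probability proportional to how often that path
    was taken before.  Frequent paths become ever more frequent."""
    rng = random.Random(seed)
    counts = [1] * paths          # one fictitious past traversal per path
    for _ in range(steps):
        total = sum(counts)
        r = rng.uniform(0, total)
        acc = 0
        for i, c in enumerate(counts):
            acc += c
            if r <= acc:
                counts[i] += 1    # reinforcement: the habit strengthens itself
                break
    return counts

counts = habit_simulation(1000)
print(counts, max(counts) / sum(counts))
```

Which path wins is itself a matter of chance, but the entrenchment is lawful: this weak self-determination is exactly the self-reference noted above.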

Now things get interesting if we consider the system as spatialized, so that the state is assigned not merely to a given time but also to a region (or point) in space. Now we can postulate that the frequency of behavior X in a location L1 causally influences the frequency (at a later time) of the same behavior in a distinct location L2. Thus we distinguish between self-inductive habit and propagating habit (cf. solutions to certain PDEs expressed via Green's functions or integral operators). This propagating aspect (perhaps not instantaneous) can be compared for instance to the creation of electromagnetic radiation through periodically moving charges. Thus local behavior is determined not only by the past habit of that region but by the sum-total of the habits of other regions. This is of course a profoundly holistic or "holomorphic" situation.

Friday, January 17, 2025

Hume, the most misunderstood philosopher

We grant that the Treatise may not be an entirely consistent work and that its precise aim may still be quite unclear.  But this does not erase the fact that Hume has suffered historically from being appropriated, perverted and misrepresented by subsequent generations.   Hume has had only a few serious or quasi-genuine readers, such as Kant, T. H. Green, Brentano, Meinong, Husserl and Whitehead.

The problem with Hume is that he does not seem to be able to make up his mind if he is engaging in a radical philosophy in the style of Descartes or in a rational and experimental psychology.

The philosophy of Hume is radically incompatible with subsequent naturalism, so-called empiricism or logical positivism.

The philosophy of Hume is not compatible with the kind of relativism or skepticism exemplified by Sextus Empiricus (whom Hume most certainly read).  On the contrary Hume values highly evidence and rigorous proof.  Consider this beautifully embarrassing passage from part II of section II (Book I):

But here we may observe, that nothing can be more absurd, than this custom of calling a difficulty what pretends to be a demonstration, and endeavouring by that means to elude its force and evidence. It is not in demonstrations as in probabilities, that difficulties can take place, and one argument counter-ballance another, and diminish its authority. A demonstration, if just, admits of no opposite difficulty; and if not just, it is a mere sophism, and consequently can never be a difficulty. It is either irresistible, or has no manner of force. To talk therefore of objections and replies, and ballancing of arguments in such a question as this, is to confess, either that human reason is nothing but a play of words, or that the person himself, who talks so, has not a Capacity equal to such subjects. Demonstrations may be difficult to be comprehended, because of abstractedness of the subject; but can never have such difficulties as will weaken their authority, when once they are comprehended. 

Here Hume is no Pyrrhonist. However, at the end of Book I, in the part where he touches upon his personal psychological problems (or rather, on the practical aspect of philosophy) and literary ambitions, it also seems that Hume is confessing Pyrrhonism, or rather a kind of Carneadian scepticism closer to the later Academy (such as was not uncommon in his time): cf. the motifs of living according to nature, being guided by veridical appearance, etc.  (Husserl agrees with this in the chapters dedicated to Hume in the Krisis.) We have already dealt with a refutation of such a stance. And indeed appealing to plausibility or probabilities is fallacious, for it leaves open the question of the degree of plausibility of inferences and judgments about plausibility.  Hence long enough chains of probabilistic reasoning with only probable rules will be altogether improbable. Hume's take on the classical argument against scepticism in part I of section IV might be construed as the definitive statement of how Hume juggled these contradictions: the arguments of the Treatise are meant Pyrrhonically, like the arguments in Sextus's Outlines, as temporary means to dethrone reason, both the attacker and the attacked ultimately becoming weaker by degrees.  According to this reading Hume is just an updated, empiricist-flavoured version of Sextus, with the important difference that Hume, as we saw above, has no liking for amphibolisms. However, part II, which appeals to 'nature' as a surrogate for reason to determine the existence of bodies, marks perhaps the lowest point in the Treatise (while again echoing Carneades and Sextus).  Hume's effort to make his scepticism consistent can only come at the expense of a naturalist dogmatism which renders his whole enterprise self-defeating.

Hume was forced to admit that there is a process of abstraction applied even to the most elementary, simple, indecomposable impressions, such as coloured points. Hume uses in various passages the expression 'under a certain light'.

Suppose that in the extended object, or composition of coloured points, from which we first received the idea of extension, the points were of a purple colour; it follows, that in every repetition of that idea we would not only place the points in the same order with respect to each other, but also bestow on them that precise colour, with which alone we are acquainted. But afterwards having experience of the other colours of violet, green, red, white, black, and of all the different compositions of these, and finding a resemblance in the disposition of coloured points, of which they are composed, we omit the peculiarities of colour, as far as possible, and found an abstract idea merely on that disposition of points, or manner of appearance, in which they agree. Nay even when the resemblance is carryed beyond the objects of one sense, and the impressions of touch are found to be Similar to those of sight in the disposition of their parts; this does not hinder the abstract idea from representing both, upon account of their resemblance. All abstract ideas are really nothing but particular ones, considered in a certain light; but being annexed to general terms, they are able to represent a vast variety, and to comprehend objects, which, as they are alike in some particulars, are in others vastly wide of each other.  (part III, section II)

Finally, Hume has given us one of the most beautiful expressions of subjective idealism in the famous passage (end of section II):

We may observe, that it is universally allowed by philosophers, and is besides pretty obvious of itself, that nothing is ever really present with the mind but its perceptions or impressions and ideas, and that external objects become known to us only by those perceptions they occasion. To hate, to love, to think, to feel, to see; all this is nothing but to perceive. Now since nothing is ever present to the mind but perceptions, and since all ideas are derived from something antecedently present to the mind; it follows, that it is impossible for us so much as to conceive or form an idea of any thing specifically different from ideas and impressions. Let us fix our attention out of ourselves as much as possible: Let us chase our imagination to the heavens, or to the utmost limits of the universe; we never really advance a step beyond ourselves, nor can conceive any kind of existence, but those perceptions, which have appeared in that narrow compass. This is the universe of the imagination, nor have we any idea but what is there produced. 

Hume's treatment of the self seems to be a distorted version of the Abhidhamma theory of anatta: see The Possibility of Oriental Influence in Hume's Philosophy. This is interesting because Buddhist affinities have been argued both for Pyrrhonism and for Hume.

See also Gaston Berger's Hume et Husserl.

Thursday, January 16, 2025

Projects

1. Extended Second-Order Logic as a general logic for philosophy (and the generalized epsilon calculus as well as connection to type theory and linear logic). An important aspect of ESOL is that it provides the technical framework for (intensional) anti-extensionalist and anti-inferentialist theories in the philosophy of logic, something which is important for anti-functionalist arguments.

2.  Universal phenomenology. This involves the synthesis of the great currents of classical philosophy: Pyrrhonism, Stoicism and (Neo)platonism. And the integration between the ancient and the modern (esp. Descartes, Hume, Kant, Hegel, Frege, Brentano), East and West. For Eastern philosophy we focus on the original philosophy of the Pali Nikayas as well as Yogâcâra and Vedânta.

The guiding idea is the possibility for consciousness to step outside itself and become integrally and clearly aware of itself: transcendental self-transparency. This can also function as a powerful psychotherapy and a path of self-development and self-improvement.

We propose that it makes sense to speak of a phenomenological method in Hegel (though this must be defined and explained carefully, for instance how it differs from Husserl's or Hume's method) and that much of Hegel's Logic can be interpreted as a phenomenological analysis of classical logical, epistemological and metaphysical (as well as scientific) concepts (mainly in their Kantian presentation) - 'common' notions that we all use and know but which we have not inquired into with all possible clarity and depth. The phenomenological method becomes ultimately self-conscious, self-referential and self-encompassing in its goal, essence and process.  For instance, consider Hegel's analysis of teleology in the section on the Object in the Encyclopedia Logic: this is clearly a phenomenological analysis. Hegel's phenomenological method includes a kind of advanced systems theory in which consciousness and self-reference (as well as self-modification and self-production) play a key role.

In the words of Hegel himself in the Encyclopedia Logic:

In other words, every man, when he thinks and considers his thoughts, will discover by the experience of his consciousness that they possess the character of universality as well as the other aspects of thought to be afterwards enumerated. We assume of course that his powers of attention and abstraction have undergone a previous training, enabling him to observe correctly the evidence of his consciousness and his conceptions.

The great illusion of modern phenomenology is that ordinary consciousness is somehow self-transparent from a first-person perspective, or that such self-transparency can be obtained by ordinary philosophical reflection or study (though the situation varies according to individual disposition and talent).  Rather, it is necessary for ordinary consciousness to step entirely outside itself in order to know itself purely and objectively; only then is phenomenology possible. This is the deeper significance and value of an anti-psychologism such as Frege's.

Ordinary consciousness is based on forgetfulness of its a priori conditioning factors, for instance temporality. But we must go deeper and inquire into what is even more forgotten: the 'self', the 'knower' and the 'agent'.

Works like Aristotle's De Anima and several key treatises of Plotinus can be seen as establishing the foundations of an authentic phenomenology; likewise the Vedanta school as well as the rival but intimately connected Yogacara school.

We view the above foundations as important elements in anti-physicalist and anti-functionalist arguments.

3. On causality, computability and the mathematical models of nature, including Hegel and Modern Topology. This is also relevant to anti-Quinean arguments in the philosophy of science.

4. Biology from an abstract point of view: take standard material from textbooks and reformulate it from a very abstract mathematical point of view to lay bare conceptual symmetries, connections and new theoretical perspectives.

5. Study the historical traditions and engage in an active defense of an ethics founded on universal human and animal rights and universal compassion. Promote critical awareness of unquestioned social values and assumptions regarding procreation.

Schopenhauer had at once great merit and great weakness. His version of Kantian idealism is very poor stuff, and his criticism of his contemporaries does not involve any serious engagement. However, this does not affect the profound insights of some fundamental aspects of his animal-rights and compassion-based ethics (quite compatible, in fact, with Kant), together with his interesting Platonic theory of aesthetics and art and a kind of Goethean biology.

(and continue paper on Analyticity and the A Priori)

Wednesday, January 8, 2025

Brentano's phenomenological idealism

Moreover, inner perception is not merely the only kind of perception which is immediately evident; it is really the only perception in the strict sense of the word. As we have seen, the phenomena of the so-called external perception cannot be proved true and real even by means of indirect demonstration. For this reason, anyone who in good faith has taken them for what they seem to be is being misled by the manner in which the phenomena are connected. Therefore, strictly speaking, so-called external perception is not perception. Mental phenomena, therefore, may be described as the only phenomena of which perception in the strict sense of the word is possible.

It is not correct, therefore, to say that the assumption that there exists a physical phenomenon outside the mind which is just as real as those which we find intentionally in us, implies a contradiction. It is only that, when we compare one with the other we discover conflicts which clearly show that no real existence corresponds to the intentional existence in this case. And even if this applies only to the realm of our own experience, we will nevertheless make no mistake if in general we deny to physical phenomena any existence other than intentional existence.

Franz Brentano, Psychology from an Empirical Point of View (1874)

Systems theory

To construct a model of reality we must consider what are to be taken as the basic elements. Postulating such elements is necessary even if they are seen as provisional or only approximate, to be analysed later in terms of a more refined set of basic elements.  A very general scheme for models involves distinguishing between time T and the possible states of reality S at a given time t, where T is the set of possible moments of time. Thus our model is concerned with the Cartesian product S×T. In modern physics we would require a more complex scheme in which T would be associated with a particular observer. It is our task to decompose or express elements of S in terms of a set of basic elements E and to use such a decomposition to study their temporal evolution.

The most general aspect of T is that it is endowed with an order of temporal precedence which is transitive. We may leave open the question whether T with this order is linear (as in the usual model of the real numbers) or branching. The most fundamental question regarding T concerns its density properties. Is time ultimately discrete (as might be suggested by quantum theory), or is it dense (between two instants we can always find a third), or does it satisfy some other property (such as the standard ordering of the ordinals in set theory)? The way we answer this question has profound consequences for our concept of determinism.

For a discrete time T we have a computational concept of determinism which we call strong determinism. Let t be a given instant of time and t′ the moment immediately after t. Then given the state s of the universe at time t we should be able to compute the state s′ at time t′. If this transition function (called the state transition function) is not computable, we may still have determinism regarding certain properties of s′, which we call weak determinism. Stochastic models also offer a weak form of determinism, although a rigorous formalization of this may be quite involved. A very weak statement of determinism would be simply postulating the non-branching nature of T.

We can also consider a determinism which involves not merely the state at the previous time but the entire past history of states, and an algorithm which determines not only the next state but the states for a fixed number of subsequent moments. For instance, the procedure would analyse the past history, determine which short patterns occurred most frequently, and then yield one of these as output, which the system would then repeat as if by "habit".
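The procedure just described can be sketched directly (a toy version with discrete states; the pattern length k is an arbitrary choice): scan the past history for the most frequent short pattern and emit it as the next block of states, as if the system repeats it "by habit".

```python
from collections import Counter

def habit_transition(history, k=3):
    """Return the length-k pattern occurring most often in the history.

    The system's next k states are determined by its entire past
    history rather than by its previous state alone.
    """
    patterns = Counter(tuple(history[i:i + k])
                       for i in range(len(history) - k + 1))
    best, _ = patterns.most_common(1)[0]
    return list(best)

history = [0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1]
print(habit_transition(history))  # -> [0, 1, 1]
```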

The postulate of memory says that all the necessary information about the past history is somehow codified in the state of the system at the previous time. For a dense time T it is more difficult to elaborate a formal concept of determinism. In this case strong determinism is formulated as follows: given a time t, the state s of the universe at t, and a time t′ > t which is in some sense sufficiently close to t, we can compute the state s′ at t′. Models based on the real numbers, such as the various types of differential equations, are problematic in two ways. First, obtaining strong determinism, even locally, is problematic and will depend on having solutions given by convergent power series expansions with computable coefficients or on numerical approximation methods. Secondly, differential models are clearly only continuum-based approximations (idealisations) of more complex real systems having many aspects which are actually discrete. The determinism of differential models can thus be seen as based on an approximation of an approximation.

We now consider the states of the universe S. The most basic distinction that can be made is that between a substrate E and a space of qualities Q. There is also an alternative approach, such as that of Takahara et al., based on the black-box model in which for each system we consider the Cartesian product X×Y of inputs X and outputs Y. In this model we are led to derive the concept of internal state as well as that of the combination of various different systems. We can easily represent this scenario in our model by simulating the input and output signalling mechanism associated to a certain subset of E. States of the universe are given by functions ϕ : E×T → Q. We will see later that it is in fact quite natural to replace such a function by the more general mathematical structure of a "functor". To understand ϕ we must consider the two fundamental alternatives for E: the Lagrangian and Eulerian approaches (these terms are borrowed from fluid mechanics).

In the Lagrangian approach the elements of E represent different entities and beings, whilst in the Eulerian approach they represent different regions of space or of some medium, such as mental or semantic space: for instance points or small regions in standard Euclidean space. The difficulty with the Lagrangian approach is that our choice of the individual entities depends on context and scale, and in any case we have to deal with the problem of beings merging or becoming connected, coming to be or disappearing, or with the indiscernibility problem in quantum field theory. The Eulerian approach, besides being more natural for physics, is also very convenient in biochemistry and cellular biology, where we wish to keep track of individual biomolecules, cells or nuclei of the brain. In computer science the Lagrangian approach could be seen in taking as basic elements the objects of an object-oriented programming language, while the Eulerian approach would consider the variation in time of the contents of a specific memory array.

We call the elements of E cells and ϕ : E×T → Q the state function. For now we say nothing about the nature of Q. In the Eulerian approach E is endowed with a fundamental bordering or adjacency relation ∼ which is not reflexive; that is, a cell is not adjacent to itself. The only axioms we postulate are that ∼ is symmetric and that each cell has at least one adjacent cell. Thus ∼ induces a graph structure on E. This graph may or may not be planar, spatial, or embeddable in n-dimensional space for some n.

We can impose a condition making E locally homogeneous, in such a way that each e ∈ E has the same number of uniquely identified neighbours. For the case of discrete T, the condition of local causality states that if we are in a deterministic scenario and at time t we have a cell e with ϕ(e,t) = q, then the procedure for determining ϕ(e,t′) at the next instant t′ needs only the information regarding the values of ϕ for e and its adjacent cells at the previous instant. Many variations of this definition are possible, in which adjacent cells of adjacent cells may also be included. This axiom is seen clearly in the methods of numerical integration of partial differential equations.

Now suppose that T is discrete, that E is locally homogeneous, and that we indicate the neighbours of a cell e by e_1, e_2, ..., e_n. Then the condition of homogeneous local causality can be expressed as follows. For any time t and cells e and e′ such that ϕ(e,t) = ϕ(e′,t) and ϕ(f_i,t) = ϕ(f′_i,t), where f_i and f′_i are the corresponding neighbours of e and e′, we have that ϕ(e,t′) = ϕ(e′,t′), where t′ is the instant after t.

An example in the conditions of the above definition is that of a symbol propagating along a direction j. If a cell e is in state on and the cell e′ adjacent to e in direction j is in state off, then at the next instant e is in state off and e′ is in state on. Stochastic processes such as diffusion can also easily be expressed in our model.
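This propagating-symbol rule is easy to realize as a one-dimensional cyclic cellular automaton (a minimal sketch; the grid size is an arbitrary choice): each cell's next state depends only on a neighbour's current state, exhibiting homogeneous local causality.

```python
def step(state):
    """One update of the propagating-symbol rule on a cyclic 1-D grid:
    cell i is 'on' at t+1 iff its left neighbour was 'on' at t, so the
    symbol moves one cell to the right per time step."""
    n = len(state)
    return [state[(i - 1) % n] for i in range(n)]

state = [1, 0, 0, 0, 0]
for _ in range(2):
    state = step(state)
print(state)  # -> [0, 0, 1, 0, 0]: the symbol has moved two cells
```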

A major problem in the Eulerian approach is to define the notion of identity of a complex being: for instance, how biological structures persist in their identity despite the constant flux and exchange of matter, energy and information with their environment.

We clearly must have a nested hierarchy of levels of abstraction and levels of approximation, and this calls for a theory of approximation. Some kind of metric and topology on E, T and the function space of the ϕ is necessary. Note that all the previous concepts carry over directly to the Lagrangian approach as well. In this approach a major problem involves formalising the way in which cells can combine with each other to form more complex beings. If we consider the example of biochemistry, we see that complex beings made up from many cells have to be treated as units as well, and that they will have their own quality space Q′ which will contain elements not realisable by a single e ∈ E. This suggests that we need to add a new relation on E to account for the joining and combination of cells, and to generalise the definition of ϕ : E×T → Q.

We take the Lagrangian approach and add a junction relation J on E. When e J e′, the cells e and e′ are to be seen as forming an irreducible being whose state cannot be decomposed in terms of the states of e and e′. The state transition function must take into account not only all the neighbours of a cell e but all the cells that are joined to any of these neighbours.

Let J* be the transitive closure of J. Let E_J denote the set of subsets S of E such that for all e, e′ ∈ S we have e J* e′. Inclusion induces a partial order on E_J. Instead of Q we consider a set 𝒬 of different quality spaces Q, Q′, Q″, ..., which represent the states of the different possible combinations of cells. Let us assume that Q represents, as previously, the states of single cells. For instance, a combination of three cells will have states which are not found in a combination of two cells or in a single cell. Suppose e is joined to e′ and the conglomerate has state q ∈ Q′. Then we can consider e and e′ individually, and there is a function which restricts q to states q1 and q2 of e and e′. In category theory there is an elegant way to combine all this information: the notion of presheaf. To define the state functions for a given time t we must consider a presheaf:
Φ_J : E_J^op → 𝒬
The state of the universe at a given instant will be given by compatible sections of this presheaf. To define this we need to consider the category of elements El(𝒬) associated to 𝒬, whose objects consist of pairs (Q,a) where a ∈ Q, and whose morphisms f : (Q,a) → (Q′,a′) are maps f : Q → Q′ which preserve the second components, f(a) = a′. Thus a state function at a given time is given by a functor:
ϕ_J : E_J → El(𝒬)
But J can vary in time, and we need a state transition function for J itself, which will clearly also depend on ϕ_J at the previous moment. Thus the transition function will involve a functor:
𝒥 : hom(E_J, El(𝒬)) → Rel(E)
and will yield a functor
ϕ_{𝒥(ϕ_J)} : E_{𝒥(ϕ_J)} → El(𝒬)
Note that we could also consider a functor
ℰ : Rel(E) → Pos
which associates E_J to each J.
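The combinatorial part of this construction (the closure J*, the poset E_J, and the restriction maps) can be sketched concretely. The encoding below is a hypothetical toy, not the categorical machinery itself: cells, a junction relation, the J*-connected subsets, and a restriction map sending the state of a conglomerate to the induced state of a part.

```python
from itertools import combinations

E = {1, 2, 3}
J = {(1, 2)}  # cells 1 and 2 are joined

def j_star(J):
    """Symmetric-transitive closure of the junction relation, on pairs."""
    rel = set(J) | {(b, a) for a, b in J}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and a != d and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

Js = j_star(J)
# E_J: subsets whose members are pairwise J*-related (singletons qualify)
E_J = [frozenset(s) for r in range(1, len(E) + 1)
       for s in combinations(sorted(E), r)
       if all((a, b) in Js for a in s for b in s if a != b)]

# A toy state of the conglomerate {1,2}: per-cell values plus a shared
# "phase" that only the conglomerate carries (illustrative assumption)
state_12 = {"cells": {1: "on", 2: "off"}, "phase": 0.3}

def restrict(state, part):
    """Restriction map of the presheaf: induced state of a sub-conglomerate."""
    return {"cells": {e: state["cells"][e] for e in part},
            "phase": state["phase"]}

print(restrict(state_12, {1}))
```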

The relation J is the basic form of junction. We can use it to define higher-level complex concepts of connectivity, such as the connectivity between various regions of biological systems. We might define living systems as those systems that are essentially connected: systems in which the removal of any part necessarily results in the loss of some connection between two other parts. This can be given an abstract graph-theoretic formulation which poses interesting non-trivial questions. Finally, we believe this model can be an adequate framework for the study of self-replicating systems.
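Reading "part" as a single cell, the graph-theoretic content of this definition can be sketched as follows (a hedged toy formulation: we merely test, for each cell, whether its removal breaks the connection between two remaining cells):

```python
def connected(adj, removed=frozenset()):
    """Is the graph connected once the cells in `removed` are deleted?"""
    nodes = [v for v in adj if v not in removed]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def cut_cells(adj):
    """Cells whose removal disconnects two other cells (cut vertices)."""
    return [v for v in adj if not connected(adj, frozenset({v}))]

# Path a-b-c: only the middle cell is a cut cell; a 3-cycle has none.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
cycle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(cut_cells(path))   # -> ['b']
print(cut_cells(cycle))  # -> []
```

One non-trivial question this surfaces immediately: under this strict single-cell reading no finite graph can have every vertex a cut vertex, so "part" in the definition above presumably has to range over larger regions.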

Sunday, December 22, 2024

Some topics in the philosophy of nature

The relationship between the concepts of determinism, predetermination, computability, cardinality, causality and the foundations of the calculus. To study this we need a mathematical general systems theory, hopefully general enough for this investigation. 

It is clear that 'determinism' is a very complex and ambiguous term and that it has only been given a rigorous sense in the case of systems equivalent to Turing machines, which are a case of finite or countably infinite systems.  Note that there are finite or countably infinite systems which are not computable and hence not deterministic in the ordinary sense of the term. Thus this sense of determinism implies computability, which in turn implies that to determine the evolution of the system we need consider only a finite amount of information involving present or past states. And we should ask how the even more complex concept of 'causality' comes in here. What are we to make of a concept of causality defined in terms of such computable determinism? Note that a system can be considered deterministic in a metaphysical sense without in fact being computable.

A fundamental problem is understanding the role of differential (and integral) equations in natural science and the philosophy of nature.  The key aspects here are: being an uncountable model, and the expression of causality in a way distinct from the computational deterministic model above.  Note the paradox: on the one hand, 'numerical methods' are discrete, computable, deterministic approximations of differential models. On the other hand, the differential models used in science are themselves clearly obtained as approximations and idealizations of nature, as for instance in the use of the Navier-Stokes equations, which discard the molecular structure of fluids.

One problem is to understand the causality and determinism expressed in differential models in terms of non-standard paradigms of computation beyond the Turing limit. One kind of hypercomputational system can be defined as carrying out a countably infinite number of computational steps in a finite time.

For a mathematical general systems theory we have considered two fundamental kinds of systems: these are transpositions to generalized cellular automata/neural networks  of the Eulerian and Lagrangian approaches  to fluid mechanics.  It is clearly of interest to consider non-countable and hypercomputational versions of such general cellular automata: to be able to express differential models in a different way and to generalize them by discarding the condition of topological locality (already found in integral-differential equations and the convolution operation, Green's function, etc.).

The deep unsolved problems regarding the continuum are involved here as well as their intimate connection to the concepts of determinism, causality, computability and the possibility of applying differential models to nature. 

A special case of this problem involves a deeper understanding of all the categories of functions deployed in modern analysis: continuous, smooth, with compact support, bounded variation, analytic, semi- and sub-analytic, measurable,  Lp, tempered distributions, etc. How can 'determinism' and even computability be envisioned in models based on these categories?

What if nature were ultimately merely measurable rather than continuous? That is, what if the temporal evolution of the states of a system, modeled as a function ϕ : T → S, must involve a merely measurable map ϕ? Our only 'causality' or 'determinism' must then involve generalized derivatives in the sense of distributions. And yet the system can still be deterministic in the metaphysical sense, and even hypercomputational in some relevant sense. Or maybe such maps are generated by sections of underlying deterministic continuous processes?

General determinism and weak causality involve postulating properties of the evolution of the system which may not be logically or computationally sufficient to predict the evolution of the system in practice. This is similar to the situation in which, given a recursive axiomatic-deductive system, we cannot know in practice whether a given sentence can be derived or not. Also, constructions like the generalized derivative of locally integrable functions involve discarding much information.

For quantum theory: actual position and momentum are given by non-continuous measurable functions over space-time (we leave open the question of particle or wave representations). The non-continuity implies non-locality which renders, perhaps, the so-called 'uncertainty principle' more intelligible. The wave-function ψ is already a kind of distribution or approximation containing probabilistic information. Quantum theory is flawed because the actual system contains more information than is embodied in the typical wave-function model - a situation analogous to the way in which the generalized derivative involves discarding information about the function.

Uncertainty, indeterminism and non-computability are thus a reflection not of nature itself but of our tools and model-theoretic assumptions. In the same way it may well be that it is not logic or mathematics that are 'incomplete' or 'undecidable' but only a certain paradigm or tool-set that we happen to employ.

Another topic: the study of nature involves hierarchies of models which express different degrees and modes of approximation or ontological idealization, though necessarily ordered in a coherent way. Clearly the indeterminism or problems of a given model at a given level arise precisely from this situation: small discrepancies at a lower level which have been swept under the rug can in the long run have drastic repercussions on higher-level models, even if most of the time they can be considered negligible. And we must be prepared to envision the possibility that such hierarchies are imposed by the nature of our rationality itself as well as by experimental conditions, and that the levels may be infinite.

Computation, proof, determinism, causality: these are all connected to temporality, to the topology and linear order of time, and a major problem involves the uncountable nature of this continuum.

In mathematical physics we generally have an at least continuous function from an interval of time into Euclidean space, configuration space or a space-time manifold, describing a particle or system of particles. More generally we have fields (sections of finite-dimensional bundles) defined on such a space, which are in general at least continuous, often locally smooth or analytic.  This can be generalized to distributions, to fields of operators or even to operator-valued distributions.  But what if we considered, at a fundamental level, movements and fields which were merely measurable and not continuous (or only section-wise continuous)? Measurable and yet still deterministic. Does this even make sense? At first glance 'physics' would no longer make sense, as there would no longer be any locality or differential laws. But there could still be a distributional version of physics, a version of physics based on integrals. Suppose the motion of a particle is now a merely measurable (or locally integrable) function ϕ : T → ℝ³, and consider a free particle. In classical physics, if we know the position and momentum at a given time then we know the position (and momentum) at any given time (uniform linear motion). But there is no canonical choice for a non-continuous function. Given a measurable function f : T → ℝ³ we can integrate and define a probability density ρ : ℝ³ → P which determines how frequently the graph of f intersects a small neighbourhood of a point x. But what are we to make of a temporally evolving ρ (we could consider a rapid time, at the Planck scale, and a slow time)?

Tentative definition of the density function:

$$\rho_f(x) = \lim_{U \downarrow x} \frac{\mu(f^{-1}(U))}{m(U)}$$

where $\mu$ is a Borel measure on $T$, $m$ is the Lebesgue measure on $\mathbb{R}^3$, and the limit is taken over neighbourhoods $U$ shrinking to $x$. Question: given a continuous function $g : \mathbb{R}^n \to K$, where $K$ is either the real or complex field, and a (signed) Borel measure $\mu$ on $T \subseteq \mathbb{R}$, is there a canonical measurable non-continuous function $f : T \to \mathbb{R}^n$ such that $\rho_f = g$ ? It would seem not. Any choice among possible 'random' candidates implies extra information. And we need to make sense of this question for continuous families $g_t$ of continuous functions, for example $g(t) = e^{i\pi t}$. The differential laws of $g_t$ might need to be seen as finite approximations.
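A minimal one-dimensional sketch of this density construction (our own toy illustration, not the construction above itself): a "trajectory" given by independent uniform samples is merely measurable and nowhere continuous, yet its occupation density, estimated by counting how often the graph meets small neighbourhoods (bins), comes out as the smooth constant function 1 on [0,1].

```python
import random

random.seed(0)

# A "merely measurable" toy trajectory f : T -> [0,1]: independent
# uniform draws, hence nowhere continuous.
N = 100_000
samples = [random.random() for _ in range(N)]

# Estimate the occupation density by counting visits to small bins.
bins = 10
counts = [0] * bins
for x in samples:
    counts[min(int(x * bins), bins - 1)] += 1

# Empirical density on each bin; close to the constant 1 everywhere.
density = [c * bins / N for c in counts]
```

Despite the wild pointwise behaviour of the trajectory, the density is smooth - the phenomenon the text asks us to take seriously at a fundamental level.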

Define real computable process. 

Another approach: we have a measurable map $f : T \to A \times B$. Suppose that we know only $f_A(t_0)$ and not $f_B(t_0)$, while the knowledge of both would be theoretically enough to compute $f(t)$ for $t > t_0$. Then given a $U \subseteq A \times B$ we can take the measure of the set $V \subseteq B$ such that if $f_B(t_0) \in V$ then $f(t) \in U$.

If a trajectory is measurable and not continuous, does velocity or momentum even make sense ? 

For $f : T \to \mathbb{R}^3$ measurable (representing the free movement of a single particle) we can define, for each interval $I \subseteq T$, $\rho_I(x) = \lim_{U \downarrow x} \frac{\mu(f^{-1}(U) \cap I)}{m(U)}$, which can be thought of as a generalized momentum but where causality and temporal order are left behind. Thus we could assign to each open interval $I \subseteq T$ a density function $\rho_I : \mathbb{R}^3 \to K$. We can then postulate that the variation of the $\rho_I$ with $I$ is continuous in the sense that given an $\epsilon$ we can find a $\delta$ such that for any partition $I_i$ of $T$ with $d(I_i) < \delta$ we have $\|\rho_{I_{i+1}} - \rho_{I_i}\| < \epsilon$ for some suitable norm.

This construction can be repeated if we consider hidden state variables for the particle, that is $f : T \to \mathbb{R}^3 \times H$ for some state-space $H$. Of course we cannot in practice measure $H$ at a given instant of time for a given particle. Note also that if we have two such measurable maps then indiscernibility follows immediately - individuation is tied to continuity of trajectories.

Space-time is like a fluctuating ether which induces a Brownian-like motion of particles - except not continuous at all, only measurable. Maybe it is $H$ that is responsible for directing the particle (a vortex in the ether) and making it behave classically in the sense of densities.

A density function (for a small time interval) moving like a wave makes little physical sense. Why would the particle jump about in its merely measurable trajectory and yet have such a smooth deterministic density function ? It is tempting to interpret the density function as manifesting some kind of potential - like a pilot wave.

The heat equation $\partial_t f = k\,\partial_x^2 f$ represents a kind of evening out of a function $f$: valleys (where $\partial_x^2 f > 0$) are raised and hills (where $\partial_x^2 f < 0$) are leveled. But heat is a stochastic process. Maybe this provides a clue to understanding the above - except in this case there is only one very rapid particle playing the role of all the molecules.
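The evening-out behaviour can be seen in a few lines (a toy finite-difference sketch of the heat equation on a ring; the grid sizes and step counts are our own choices):

```python
# Explicit finite differences for u_t = k u_xx on a periodic ring:
# an initial spike is evened out while total "heat" is conserved.
k, dt, dx = 1.0, 0.1, 1.0          # k*dt/dx**2 = 0.1 <= 0.5, so stable
n = 50
u = [1.0 if i == n // 2 else 0.0 for i in range(n)]   # a single hill

for _ in range(200):
    u = [u[i] + k * dt / dx**2 * (u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n])
         for i in range(n)]
# the hill has been lowered and spread over the whole ring
```

The discrete Laplacian raises each point below the average of its neighbours and lowers each point above it, which is exactly the "valleys raised, hills leveled" picture.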

Another approach: given a continuous function in a region, $\phi : U \to \mathbb{R}^+$, construct a nowhere continuous function $\tau : T \to U$ such that $\phi$ is the density of $\tau$ in $T$. This is the atomized field. The Schrödinger equation is then an approximation, just as the Navier-Stokes equations ignore the molecular structure of fluids.

Newton's first law of motion expresses minimality and simplicity for the behaviour of a free particle. We can say likewise that an atomized field, if free, is completely random, spread out uniformly in a region of space - as yet without momentum. Momentum corresponds to a potential which directs and influences the previously truly free atomized field. Our view is that a genuinely free particle or atomized field is one in which the particle has equal probability of being anywhere (i.e. it does not have cohesion; any cohesion must be the effect of a cause). Thus Newton's free particle is not really free but a particle under the influence of a directed momentum field. There are rapid processes which create both the atomized field (particle) and the momentum field.

Why should we consider a gravitational field as being induced by a single mass when in reality it only manifests when there are at least two ? 

In Physics there are PDEs which are derived from ODEs of physics at a more fundamental level and there are PDEs that are already irreducibly fundamental.

A fundamental problem in philosophy: the existence of non-well-posed problems (in PDEs), even with smooth initial conditions. This manifests not so much the collapse of the differential model of determinism as the essentially approximative nature of PDE modelling. Philosophically, the numerical approximation methods and the PDEs might be placed on an equal footing. They are both approximations of reality. Even the simplest potential (the gravitational field of a point particle) must have a discontinuity.

Weak solutions of PDEs are in general not unique. Goodbye determinism. The problem with mathematical physics is that it lacks an ontology beyond the simplest kind. It is applied to local homogeneous settings - or systems which can be merged together in a straightforward way. It lacks a serious theory of individuality and interaction - which is seen in the phenomenon of shock waves.

The above considerations on quantum fields are of course very simple and we should address rather the interpretation of quantum fields as (linear) operator valued distributions (over space-time) (see David Tong's lectures on QFT). This involves studying the meaning of distributions and the meaning of  (linear) operator fields - and of course demanding perfect conceptual and mathematical rigour with no brushing infinities under the carpet. And consider how a Lagrangian is defined for these fields involving their derivatives and possibly higher powers (coupling, self-interaction, "particle" creation and annihilation). What does it even mean to assign to each point an operator on a Hilbert space (Fock space) ? How can this be interpreted according to the approach above ?

But we have not even touched upon some of the fundamental elements of physics: Lagrangian densities (why the restrictions on their dependencies ?), the locality of the Lagrangian, the principle of least action, Noether's theorem, Lorentz invariance, the energy-momentum tensor. But we consider the distinction between scalar and vector fields to be of the utmost mathematical and philosophical significance.

And what are some points regarding classical quantum field theory ? The interplay between the Heisenberg and Schrödinger pictures in perturbation theory. That our 'fields' now are our canonical coordinates seen as fields of operators. That now we have a whole calculus of operator-valued functions (for example $a_p e^{ipx}$ where $p$ is a momentum vector and $a_p$ is the corresponding creation operator): PDEs, integral solutions, Green functions, propagators, amplitudes via scattering matrices, etc. That the field itself is now beyond space and time - it is not a function of space and time, but recalls rather the Alayavijnana in Yogâcâra philosophy; physics is an excitation via such operator fields of this primordial field (and we will not enter here into a discussion of zero energy and the Casimir effect).

How do we deploy our approach then to elementary QFT ? Perhaps consider a merely measurable field (not necessarily continuous) $M \to A$ in which $M$ is space-time and $A$ is some structure over which it is possible to define topologies, σ-algebras and do calculus.

The structure that replaces the real or complex field in the operator calculus might be seen most naturally as a C*-algebra. But operators act on a Hilbert space. So we need to consider that our C*-algebras also have representations on a given inner product space $F$. Thus a field takes space-time points to representations of C*-algebras on a space $F$. Amplitudes $\langle 0|\phi(x)\phi(y)|0\rangle$ (here $0$ represents the ground state, not the null element of a vector space) are obtained by applying the representation to $|0\rangle$ and then taking the inner product again with $|0\rangle$. More generally this gives amplitudes for certain transitions. The actual state space for a quantum field (with its excitations) is completely non-localized. But could this be given a topological interpretation (without having to go global), for instance as (co)homology of a bundle ?
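The recipe "apply the operators to the ground state, then take the inner product with the ground state again" can be made concrete on a finite truncation of Fock space (a toy sketch of our own; the dimension $D$ and matrix representation are illustrative assumptions):

```python
import math

# Truncated Fock space of dimension D: matrices for the annihilation
# operator a (a|n> = sqrt(n)|n-1>) and its adjoint a_dag.
D = 5
a = [[0.0] * D for _ in range(D)]
for n in range(1, D):
    a[n - 1][n] = math.sqrt(n)
a_dag = [[a[j][i] for j in range(D)] for i in range(D)]   # adjoint

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

vac = [1.0] + [0.0] * (D - 1)      # the ground state |0>
# <0| a a_dag |0>: apply a_dag then a to |0>, take inner product with |0>.
amp = sum(x * y for x, y in zip(vac, matvec(a, matvec(a_dag, vac))))
# amp = 1, whereas the reversed order <0| a_dag a |0> gives 0.
```

The non-commutativity of the two orderings is the elementary seed of the operator calculus discussed above.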

Addendum to our paper Hegel and modern topology: for physics and systems theory the most developed and relevant sections in the Logic are found in the section on Object (mechanism, chemism and teleology) and the first parts of the section on Idea (with regard to process and life). For the relevance to topology and geometry this is a good point of entry, with special focus on self-similarity (maybe in quantum field theory ?) and goal-oriented systems. The final parts of Idea are clearly about self-reference, self-knowledge, self-reproduction, self-production and clearly culminate in a form of abstract theology. Concept-Idea is both essentially self-referential (it is knowledge knowing itself, knowledge that is being and being that is knowledge, and also process and goal) and self-productive as well as generative, in the sense that it generates or emanates nature.

What might be the deeper philosophical significance of the delta function in physics, the fact that its Fourier transform (in the sense of Fourier transforms of distributions) is a constant function ? It seems to have something to do with the correspondence between space and time.

The following is certainly very relevant to our discussion on determinism, causality and differential vs. measurable models.

1. Can thermodynamics be deduced mathematically from a more fundamental physical theory ?

2. Could we consistently posit a fifth force in nature which manifests in a decrease in entropy ?

The problem here is that many ordered states can be conceived as evolving as usual into the same more highly disordered state.  This can even be approached by attempting to give an underlying deterministic account (as in the kinetic theory of gases).  Thus thermodynamics just gives general dynamical systems results that apply to the specific systems of nature.

But if a new force manifested in the sense of decreasing entropy, then a reconciliation with determinism would be more problematic: from a single chaotic state there is a multitude of ordered states into which it could (apparently, at least) consistently evolve (emerge). Thus there seems to be some kind of choice, freedom, a collapse-of-the-wave-function-like process.

Perhaps in nature there is a conservation law involving entropy and anti-entropy.  Life is a manifestation of the anti-entropy which balances out the usual entropy increase in physics.

Like consciousness, causality, determinism, proof, number and computation - entropy is intimately connected to the direction and linear flow of time. 

The big question is: are there finitary computational or differential deterministic processes which have entropy-decreasing behavior (do they evolve complex behaviour, self-reference, etc.) ?  We would say that this seems indeed not to be the case. Thus we need to move on to: infinitary (hyper) computational systems and to deterministic but not necessarily continuous systems. There is indeed a connection between the problems of the origin and nature of life and Gödel's incompleteness theorems and the problems in quantum field theory.

Differential models and their numerical approximation methods are some of the most successful and audacious applications and apparent extensions of our finitary computable paradigm. But they cannot explain or encompass everything. 

If we postulated that space and time were discrete, then there could be no uniform linear motion - for instance, going two space units in three time units. At the second time unit the particle could equally claim to be in the first or second space cell - hence a situation of uncertainty. For more complex movement the uncertainty distribution can be greater.
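The lattice ambiguity can be tabulated directly (our own illustrative sketch of the two-cells-per-three-ticks example):

```python
from fractions import Fraction

# Uniform motion of 2 space cells per 3 time ticks on a discrete
# lattice: at most ticks the exact position 2t/3 falls strictly
# between two cells, so a lattice description must pick one.
ambiguous = []
for t in range(7):
    x = Fraction(2, 3) * t
    if x.denominator != 1:                     # not on a lattice point
        lo = x.numerator // x.denominator
        ambiguous.append((t, (lo, lo + 1)))    # the two candidate cells
```

At ticks 1, 2, 4 and 5 the exact position is non-integral, so four out of every six ticks carry this elementary positional uncertainty.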

The Hydrogen atom: examine carefully the methods of attaining the solutions of the Schrödinger equation in this instance and see if the solutions (involving spherical harmonics) can be given other physical interpretations (of the electron 'clouds') along the lines of our proposal.

What we need to do: elaborate the mathematical theory of how a free "quantum" particle (i.e. a particle with a completely discontinuous random trajectory in space) comes under the influence of a classical potential.  

Since we cannot write a differential equation for the non-continuous trajectory, our determinism must be defined by a differential equation on the probability density (as explained above). Take a potential and the Laplace equation. Physically, if the 'particle' is influenced by the potential then the totally dispersed 'cloud' will be attracted and reshaped according to the solutions of the equation.

Monday, September 9, 2024

New logical investigations

Let us face it. We know and understand very little about the 'meaning' of such homely terms as 'water' (mass noun). Meaning is not 'inscrutable'; it is just very complex and has not been investigated with complete candor or penetrating enough insight.

A linguistic segment may acquire individual additions or variations of meaning depending on linguistic context  (there is no water-tight segmentation) and yet still contain a certain invariant meaning in all these cases - all of which cannot be brushed away under the term 'connotation'.  For instance compare the expressions 'water is wet', 'add a little water' and 'the meaning of the term 'water''. 

This is clearly related to psychologism and its problems and the inter-subjective invariance of meaning.

In literary criticism there is actually much more linguistic-philosophical acumen, for example in asking 'what does the term X mean for the poet' or 'explain the intention behind the poet's use of the term X'.

Let us face it. Counterfactuals and 'possible worlds', if they are to make any sense at all, demand vastly more research and a more sophisticated conceptual framework. We do not know if there could be any world alternative (in any degree of detail) to the present one. The only cogent notion of 'possible world' is a mathematical one or one based on mathematical physics. There is at present no valid metaphysical or 'natural' one - or one not tied to consciousness and the problem of free will.

Given a feature of the world we cannot say a priori that this feature could be varied in isolation in the context of some other possible world. For instance, imagining an alternative universe exactly like this one except that the formula for water is not H₂O is not only incredibly naive but downright absurd.

Just as it is highly problematic that individual features of the world could vary in isolation in the realm of possibility so too is it highly problematic that we can understand the 'meaning' of terms in isolation from the 'meaning' of the world as a whole.

There is no reason not to consider that there is a super-individual self (Husserl's transcendental ego or Kant's transcendental unity of apperception) as well as a natural ego in the world. What do we really know about the 'I', the 'self', all its layers and possibilities ? The statement 'I exist' is typically semantically complex and highly ambiguous. But it has at least one sense in which it cannot be 'contingent'. Also considerations from genetic epistemology can lead to doubt that it is a priori.

There are dumb fallacies which mix up logic and psychology, ignore one of them, artificially separate them or ignore obvious semantic distinctions. And above all the sin of confusing the deceptively simple surface syntax of natural language with authentic logical-semantic structure ! For instance: 'Susan is looking for the Loch Ness Monster' and 'Susan is looking for her cat'.  It is beyond obvious that the first sentence directly expresses something that merely involves Susan's intentions and expectations whilst the second sentence's most typical interpretation involves direct reference to an actual cat. The two sentences are of different types.

We live in the age of computers and algorithms. Nobody in their right mind would wish to identify a 'function' with its 'graph' except in the special field of mathematics or closely connected areas. If we wish to take concepts as functions (or as functions from possible worlds to truth values) then obviously their intensional computational structure matters as much as their graphs. Hence we bid farewell to the pseudo-problems of non-denoting terms.
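The point is immediately visible in code (an illustrative example of our own): two procedures with exactly the same graph can have radically different intensional, computational structure.

```python
import functools

# Same graph (extension), different intension: the naive recursion
# takes exponentially many steps, the memoized version linearly many.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@functools.lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

Identifying either procedure with the set of its input-output pairs erases precisely the structure that distinguishes them.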

Proper names are like titles for books we are continuously writing during our life - and in some rare cases we stop writing and discard the book. And one book can be split into two or two books merged into one.

It is very naive to think that in all sentences which contain so-called 'definite descriptions'  a single logical-semantic function can be abstracted.  We must do away with this crude naive abstractionism and attend to the semantic and functional richness of what is actually meant without falling into the opposite error of meaning-as-use, etc.

For instance 'X is the Y' can occur in the context of learning: a fact about X is being taught and incorporated into somebody's concept of X. Or it can be an expression of learned knowledge about X: 'I have been taught or learned that X is the Y'. Or it can be an expression of the result of an inference: 'it turns out that it is X that is the Y'. Why must all of this correspond to the same 'proposition' or Sinn ?

Abstract nouns are usually learnt in one go, as part of linguistic competence, while proper names reflect an evolving, continuous, even revisable learning process. Hence these two classes have different logical laws.

The meaning of the expression 'to be called 'Mary'' must contain the expression 'Mary'. So we know something about meanings ! 

How can natural language statements involving dates be put into relationship with events in a mathematical-scientific 'objective' world (which has no time or dynamics) when such dates are defined and meaningful only relative to human experience ? What magically fixes such a correspondence ? This goes for the here and now in general. What makes our internal experience of a certain chair correspond to a well-defined portion of timeless spatio-temporal objectivity ?

What if most if not all modern mathematical logic could be shown to be totally inadequate for human thought in general and in particular philosophical thought and the analysis of natural language ? What if modern mathematical logic were shown to be only of interest to mathematics itself and to some applied areas such as computer science ? 

By modern mathematical logic we mean certain classes of symbolic-computational systems starting with Frege but also including all recent developments. All these classes share or move within a limited domain of ontological, epistemic and semantic presuppositions and postulates.

What if entirely different kinds of symbolic-computational systems are called for to furnish an adequate tool for philosophical logic, for philosophy, for the analysis of language and human thought in general ? New kinds of symbolic-computational systems based on entirely different ontological, epistemic and semantic postulates ?

The 'symbols' used must 'mean' something, whatever we mean by 'meaning'. But what, exactly ? Herein lies the real difficulty. See the books of Claire Ortiz Hill.  It is our hunch that forcing techniques and topos semantics will be very relevant.

However there remains the problem of infinite regress: no matter how we effect an analysis in the web of ontology, epistemology and semantics, this will always involve elements into which the analysis is carried out. These elements in turn fall again directly into the scope of the original ontological, epistemological and semantic problems.

If mathematics, logic and philosophy have important and deep connections, it was perhaps the way these connections were conceived that was mistaken. Maybe it is geometry rather than classical mathematical logic that is more directly relevant to philosophy.

What if a first step towards finding this new logic were the investigation of artificial ideal languages (where we take 'language' in the most general sense possible) and the analysis of why and how they work as a means of communication ?

Consider an alien race that only understood first-order logic. How would we explain the rules of Chess, Go or Backgammon ? And how do we humans understand and learn the rules of these games when their expression in first-order logic is so cumbersome and convoluted and extensive ?  Expressing them in a programming language is much simpler...perhaps we need higher-level languages which are still formal and can be reduced to lower-languages as occasion demands. How do we express natural language high-level game concepts, tactics and strategy, in terms of low-level logic ?
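The contrast is easy to exhibit (an illustrative fragment of our own, using the knight's move): one line of a high-level language against what would be a long first-order formula over board coordinates.

```python
# A chess knight's legal move: the displacement is 1 in one coordinate
# and 2 in the other. The first-order rendering over coordinates would
# enumerate eight disjuncts with explicit arithmetic side-conditions.
def knight_move(a, b):
    (x1, y1), (x2, y2) = a, b
    return {abs(x1 - x2), abs(y1 - y2)} == {1, 2}
```

The high-level formulation is still perfectly formal and mechanically reducible to the low-level one - which is exactly the layered picture of languages suggested above.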

Strange indeed to think that merely recursively enumerable systems of signs can represent or express all of reality... how can uncountable reality ever be captured with at most countable languages (cf. the Löwenheim-Skolem theorems, the problems with categoricity, non-standard analysis, etc.) ?

All mathematical logic - in particular model theory - seems itself to presuppose that it is formalizable within ZF(C). Is this not circular ? Dare to criticize standard foundations; dare to propose dependent type theory, homotopy type theory, higher topos theory as alternative foundations.

The Löwenheim-Skolem theorems cannot be used to argue for the uncertainty or imprecision of formal systems because, for instance: (i) these results are focused on first-order logic and the situation for second and higher-order logic is radically different (for instance with regard to categoricity); (ii) according to the formal verification principle these metatheorems themselves have to be provable in principle in a formal metasystem. If we do not attach precise meaning to the symbols and certainty to the deductive conclusions in the metasystem, what right have we to attach any definite meaning or certainty to the Löwenheim-Skolem theorems themselves ?

But of course the formal verification principle needs to be formulated with more precision, for obviously, given any sentence in a language, we can always think of a trivial recursive axiomatic-deductive system in which this sentence can be derived. The axiomatic-deductive system has to satisfy properties such as axiomatic-deductive minimality and optimality and type-completeness, i.e., it must capture a significantly large quantity of true statements of the same type - the same 'regional ontology'. Also the axioms and primitive terms must exhibit a degree of direct, intuitive obviousness and plausibility. And the system must ideally be strong enough to express the 'core analytic' logic.

The formal mathematics project might well be the future of mathematics itself.

The problems of knowledge: either we go back to first principles and concepts, the seeds, but lose the actual effective development, unfolding, richness, life - also having to bear in mind that the very choice of principles might have to change according to goals and circumstance - or else we delve into the unfolding richness of science but become lost in the alleys of specialization and limited, partial views. Either we are too far away to see detail and life or we are too close to see anything but a small part and miss the big picture. Also, when we are born into the world 'knowledge' is first forced onto us; there is both contingency and necessity. It is only later that we review what we have learnt. A great step is when we step back to survey knowledge itself, attempt to obtain knowledge about knowledge, to criticize knowledge. Transcendental knowledge is not the same as the ancient project of 'first philosophy'.

If we take natural deduction for first-order logic and assume the classical expression of $\forall$ in terms of $\exists$ (i.e. $\forall x\, P(x)$ as $\neg\exists x\,\neg P(x)$), then we do not need the natural deduction rules for $\forall$ at all. This can be used as part of my argument related to ancient quantifier logic. Aristotle's metalogic in the Organon is second-order or even third-order.
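Over finite domains the classical reduction of the universal to the existential quantifier can be checked mechanically (a minimal sketch of our own; the function names are illustrative):

```python
# forall x P(x)  iff  not exists x such that not P(x):
# two evaluators that agree on every finite domain.
def forall_direct(dom, P):
    return all(P(x) for x in dom)

def forall_via_exists(dom, P):
    return not any(not P(x) for x in dom)
```

Of course classically the equivalence holds outright; intuitionistically only the easy direction survives, which is one reason the choice of primitive rules matters.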

Overcoming the categories and semantics - or rather showing their independence and holism. With this theme we can unite such disparate thinkers as Sextus, Nâgârjuna, Hume and Hegel - and others to a lesser extent (for instance Kant). Notice the similarity between the discussion of cause in Sextus, Nâgârjuna and Hegel. The difference is that Sextus aims for equipollence, Nâgârjuna to reject all the possibilities of the tetralemma while Hegel continuously subsumes the contradictions into more contentful concepts hoping thereby to ladder his way up to the absolute. And yet how pitiful is the state of logic as a science....once we move away from classical mathematics and computer science.  The idea of a formal mathematical logic (or even grammar) adequate for other domains of thought, remains elusive ! 

We can certainly completely separate the content and value of Aristotle's Organon and Physics from Aristotle's politics and general world-view. Can we do this for Plato too ? 

Cause-and-effect: the discrete case. Let $Q$ denote the set of possible states of the universe at a given time and denote the state at time $t$ by $q(t)$. Then this will depend on the previous values $q(t')$ for $t' < t$. Thus determinism is expressed by functions $f_t : \prod_{t' < t} Q \to Q$. Now suppose that $Q$ can be decomposed as $S^B$, where $B$ represents a kind of proto-space and $S$ local states for each element $b \in B$ (compare the situation in which an elementary topos turns out to be a Grothendieck topos). Now we can ask about the immediate cause of the states of certain subsets of $B$ at a time $t$ - that is, the subset of $B$ whose variation of state would change the present state. But a more thorough investigation of causality must involve continuity and differentiability in an essential way. Determinism and cause-and-effect depend on the remarkable order property of the real line and indeed on the whole problem of infinitesimals...
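The notion of "immediate cause" admits a direct toy implementation (our own sketch: $B$ is a ring of cells, $S = \{0,1\}$, and the update rule is an arbitrary illustrative choice):

```python
# A deterministic update on Q = S^B (S = {0,1}, B a ring of cells):
# each cell becomes the sum mod 2 of its neighbourhood.
def step(state):
    n = len(state)
    return tuple((state[(i - 1) % n] + state[i] + state[(i + 1) % n]) % 2
                 for i in range(n))

# The immediate causes of cell i are the cells whose variation at the
# previous instant would change i's next state.
def immediate_causes(state, i):
    base = step(state)[i]
    causes = []
    for j in range(len(state)):
        varied = list(state)
        varied[j] ^= 1                 # vary cell j's state
        if step(tuple(varied))[i] != base:
            causes.append(j)
    return causes
```

For this rule the immediate causes of a cell are exactly its neighbourhood, i.e. causality here reduces to the locality of the update.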

The problem with modern physics is that it lacks a convincing ontology. Up to now we have none except the division into regions of space-time and their field-properties. Physics should be intuitively simple. But all ontologies are approximative only and ultimately confusing.

Does Lawvere's theory of quantifiers as adjoints allow us to view logic as geometry ? $\exists$ corresponds to projection and $\forall$ to containment of fibers. Let $\pi : X \times Y \to X$ be the canonical projection and let a geometric object $P \subseteq X \times Y$ represent a binary predicate. Then $\exists y\, P(x,y)$ is represented by the predicate $\pi(P) \subseteq X$ and $\forall y\, P(x,y)$ is represented by $\{x \in X : \pi^{-1}(x) \subseteq P\}$. For monadic predicates we use $\pi : X \to \{\ast\}$, so that for $P \subseteq X$ we have that $\exists x\, P(x) = \{\ast\}$ corresponds to $P$ being non-empty and $\forall x\, P(x) = \{\ast\}$ corresponds to $P = X$. Combining this we see that $\forall x \exists y\, P(x,y)$ corresponds to $\pi(P) = X$ and $\exists x \forall y\, P(x,y)$ corresponds to $P$ containing a fiber $\pi^{-1}(x)$. Exercise: interpret the classical expression of $\forall$ as $\neg\exists\neg$ geometrically. Conjunction is intersection, disjunction is union. What is the geometrical significance of classical implication $P \to Q$ as $P^c \cup Q$ (for monadic predicates) ? This is all of $X$ only if $P \subseteq Q$. So it measures how far away we are from the situation of containment.
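On finite sets the two quantifiers and their adjointness can be computed outright (a sketch of our own; the particular $X$, $Y$ and $P$ are arbitrary illustrative choices):

```python
# A binary predicate as a subset P of X x Y; quantification along the
# projection pi : X x Y -> X.
X, Y = range(3), range(2)
P = {(0, 0), (0, 1), (1, 0)}

exists_y = {x for (x, y) in P}                 # image of P under pi
forall_y = {x for x in X
            if all((x, y) in P for y in Y)}    # fiber {x} x Y inside P
```

The adjunction itself is checkable: for every $A \subseteq X$, $\pi^{-1}(A) \subseteq P$ holds exactly when $A \subseteq \forall_y P$, which is the right-adjointness of $\forall$ to inverse image.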

We have a meaning M and project it to a formal expression E in a system S. Then we apply mechanical rules to E to obtain another formal expression E'. Now somehow we must be able to extract meaning again from E' to obtain a meaning M'. But how is this possible ? Reason, argument, logic, language - it is all very much like a board game. The foundations of mathematics: this is the queen of philosophy.

Jan Westerhoff's book on the Madhyamaka, p. 96.  I fail to see how the property "Northern England" can depend existentially on the property "Southern England".   Because conceptual dependency only makes sense relative to a formal system.  I grant  B may be a defined property and A's definition may explicitly use B. But why can't we just expand out B in terms of the primitives of the formal system in use ? And what does it even mean for two concepts to be equal ? What are we doing when replacing a concept by its definition (and Frege's puzzle, etc.) ?  

A must read: Hayes' essay on Nâgârjuna. Indeed svabhava is both being-in-itself and being-for-itself !

T.H. Green on Hume is just as good as anything Husserl or Frege wrote against psychologism or empiricism.

René Thom: quantum mechanics is the intellectual scandal of the 20th century. An incomplete and bad theory  that includes the absolutely scientifically unacceptable nonsense of the 'collapse of the wave-function'. 

Bring genetic epistemology (child cognitive development) into the foreground of philosophy. Modify Husserl's method into a kind of phenomenological regression.

When we say 'we' do we mean I +  he/she or they  - or something different ? 

There is a formal logico-mathematical perfection in Plato's earlier dialogues. It is where Aristotle, perhaps, got much of his Topics.

If A and B are decidable predicates then the containment $A \subseteq B$ need not be decidable. This is important.

The effective topos - uniform fibration - all this goes back to understanding predicate logic after propositional logic is understood intensionally in terms of realizability. A proposition means all the ways it is computationally realized. Note that there have to be various ways because of disjunction. This is pure intensionality. So a proposition's meaning is a subset of $\mathbb{N}$. But does this subset have to be itself computable ? Predicates are then maps $P : X \to \mathcal{P}(\mathbb{N})$.

Agda is the best proof assistant. Predicates are just fibered types $X \to \mathrm{Set}$. Agda is pure combinatoric 'lego' logic. Elegant, simple, powerful, flexible.

Zalta's encoding, the fusion of properties into an abstract individual object - this is a benign form of self-reflection or return-to-self whereby a property may be predicated of itself in second-order logic.

What does it mean to know something ? For instance the term 'man' is part of linguistic communities. But how can knowing a definition have much to do with scientific knowledge in the modern sense ? And how do we account for the meaning of such terms across possible worlds ? The problem is even harder for individuals which are the referents of proper names. The term 'man' would seem to conceal an open-ended horizon of facts and knowledge, as indeed does the term 'animal'. But perhaps what is invariant under the increase of knowledge in the semantic scope of the terms is the relation between the two terms. The ancient knowledge of definitions was thus knowledge of the invariant relation between epistemically open terms. Of course the 'difference' employed can itself be open-ended and capable of extension and refinement, but the whole idea is that it should be simpler and more stable than the genus and the species.

Difference between 'concept' and 'meaning'.  When somebody says 'man !' clearly the mental content invoked is not the sum total of one's epistemic domain related to this term - one's concept of man. Rather it is a minimal relevant 'sketch'  (and this can be ascertained by the phenomenological method). Perhaps like 'pointers' in C. Something similar must be happening for proper names. Indeed the whole problem of proper names is related to individual essences and gets tangled with the problem of determinism.  Maybe 'sketches' are like definite descriptions...which only point to a more complete concept.

The Halting problem is undecidable. Suppose we had a machine with code $u$ which, given the code $e$ of a machine and an input $f$, could tell whether $\{e\}(f)$ halts. Now consider the machine with code $d$ which, given an input $x$, computes as follows: if $\{x\}(x)$ halts then it never stops, otherwise it stops. Does $\{d\}(d)$ stop ? If it does, that means that $\{d\}(d)$ must never stop (contradiction). If it doesn't, then by the definition of $d$ it does. Hence the machine $u$ cannot exist. For term-rewriting systems: there is no term-rewriting system U which can be interpreted as giving the answer with regard to the derivability of a word W starting from a word S with rules R.
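The diagonal construction can be made concrete (our own sketch, representing programs as Python functions rather than codes): given any candidate halting-decider, build the program $d$ that does the opposite of what the decider predicts about $d$ run on itself.

```python
# Given a candidate decider halts(program, input) -> bool, build the
# diagonal program d: it loops if predicted to halt, halts if
# predicted to loop.
def make_diagonal(halts):
    def d(x):
        if halts(x, x):
            while True:        # predicted to halt: loop forever instead
                pass
        return "halted"        # predicted to loop: halt instead
    return d

# A candidate decider that claims nothing halts; running the diagonal
# program on itself refutes it.
claims_nothing_halts = lambda f, a: False
d = make_diagonal(claims_nothing_halts)
verdict = claims_nothing_halts(d, d)   # decider: "d(d) does not halt"
outcome = d(d)                         # but executing it, d(d) halts
# (a decider claiming d(d) halts would be refuted symmetrically, by
# d(d) never terminating - which we naturally do not execute here)
```

No single execution proves the full theorem, but every concrete candidate decider is refuted by its own diagonal program in exactly this way.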

Globality: a function f may be continuous on each of two disjoint sets A and B of real numbers yet fail to be continuous on A ∪ B - take f = 0 on A = [0,1] and f = 1 on B = (1,2]. The definition of continuity at a boundary point is what is problematic; only when A and B are both open (or both clopen) in A ∪ B does continuity on the pieces guarantee continuity on the union.

A postulate of pure reason: there is a term rewriting system T and a term rewriting system S such that a derivation in T is taken as certain knowledge that a certain word cannot be derived in S.

Two visions of the absolute: on the one hand, the plurality of mutually and self-reflecting, interpreting and meta-interpreting axiomatic-deductive systems; on the other, the systems-theoretic view of a plurality of learning, dynamic, communicating interaction systems which represent the whole within themselves (representation).  Thus we have the paradigm of deductive or proof systems for the first vision and computational systems for the second, though by 'computational' systems we include non-standard paradigms beyond the Turing limit as well as systems inspired by biology and by consciousness. And yet it is through axiomatic-deductive systems that such systems are known and understood.

The whole and the part. How parts are organized into the whole via relations between parts. To be able to compare and identify parts - that is, to seize the type of a part and to differentiate between its instances. To be able to change the whole through the replacement of a particular part.
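These three operations - seizing the type of a part, differentiating its instances, and replacing a part within the whole - can be given a minimal illustrative sketch (all names hypothetical):

```python
class Part:
    def __init__(self, kind, serial):
        self.kind = kind          # the type of the part
        self.serial = serial      # what differentiates instances of one type

class Whole:
    def __init__(self, parts):
        self.parts = list(parts)  # the whole: parts under a relation (here, order)

    def replace(self, old, new):
        # Change the whole by replacing one particular part,
        # which must be of the same type as the part it replaces.
        if new.kind != old.kind:
            raise ValueError("replacement must be of the same type")
        self.parts[self.parts.index(old)] = new

a, b = Part("wheel", 1), Part("wheel", 2)
w = Whole([a, b])
assert a.kind == b.kind and a is not b     # same type, distinct instances
w.replace(a, Part("wheel", 3))             # the whole survives the replacement
assert [p.serial for p in w.parts] == [3, 2]
```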

topos-HOL with reflection/representation: enc:[I,[I]].

Absolutism allows the validity of a relative relativism relative to an absolute canvas. It only requires that there be some absolutely knowable absolute truth - like core arithmetic and human and animal rights - which then furnishes the required framework for all further meaningful relative perspectives and action.  Relativism on the other hand cannot tolerate there being any absolute whatsoever; but in so legislating it itself becomes a form of absolutism and thus implodes.

Systems theory in ancient philosophy:  when genus, property or definition depend essentially on interaction and relation.  The axiomatic-deductive vs. the systems-theoretic view.  And from a logical perspective the analytic and semi-analytic methods are so much better - this is the way to do differential equations.

What are the scientific theories which are strictly necessary for the design and manufacture of the most important technology ? And what mathematics would be strictly necessary for these scientific theories ? And are not all our functions analytic with computable coefficients - and how should we view numerical methods philosophically ?
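As an illustration of a function that is analytic with computable coefficients, the exponential can be evaluated from its power series, each coefficient 1/n! being an exactly computable rational number (a minimal sketch; the function name is hypothetical):

```python
from fractions import Fraction

def exp_series(x, terms):
    # Partial sum of exp(x) = sum over n of x^n / n!.
    # Every coefficient 1/n! is computed exactly as a rational.
    total, term = Fraction(0), Fraction(1)
    xq = Fraction(x)
    for n in range(terms):
        total += term
        term = term * xq / (n + 1)   # x^(n+1) / (n+1)!
    return total

approx = exp_series(1, 25)           # an exact rational approximation of e
assert abs(float(approx) - 2.718281828459045) < 1e-12
```

Numerical methods, by contrast, typically work with truncated floating-point coefficients rather than such exact rationals, which is part of what makes their philosophical status a separate question.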

The correct foundation for the calculus and the theory of the continuum is still an open problem. We must get rid of the bad influence of ZFC foundations. Does it make sense to say that a recursive axiomatic-deductive system can 'grasp' the continuum ? We need to investigate and promote alternative foundations based on dependent type theory and category theory.

Mathematics is primarily relevant and valuable in the following respects: i) for application and deployment in science, including the clarification of the essential optimal structure of the relevant deployment, and ii) as a preparation and antechamber to pure logic and philosophy - with particular emphasis on computability.    The other kind of mathematics, pursued for its own sake without any regard for logic, philosophical foundations, or proof-theoretic and conceptual optimality, while legitimate, is certainly overrated by society and certainly should not be set up as a paradigm of human 'intelligence'. The same goes for theoretical physics which has no contact with experimentation, empirical evidence or practical applications, and for logically and conceptually radically incomplete or inconsistent theories.

Physics lacks an ontology. Its usual ontology is merely derived, accidental and approximative.  Consider a computer as a huge cellular automaton (CPU + RAM + storage and peripherals). How can we justify, at the low level, the abstraction to higher-level data structures and processes ? For instance, what if aliens observed the working of a computer only at this low level ?
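The difficulty of justifying higher-level abstractions over a low-level substrate can be illustrated concretely: the very same bit pattern admits several incompatible high-level readings, and nothing at the level of the bits themselves privileges one of them (a minimal sketch using Python's struct module):

```python
import struct

raw = bytes([0x41, 0x42, 0x43, 0x44])     # the 'low level': just four bytes

as_text  = raw.decode("ascii")             # one high-level reading: a string
as_int   = struct.unpack("<I", raw)[0]     # another: a little-endian integer
as_float = struct.unpack("<f", raw)[0]     # another: a 32-bit float

assert as_text == "ABCD"
assert as_int == 0x44434241
# The bytes themselves do not decide which abstraction is the 'real' one.
```

An observer with access only to the substrate - the alien of the note - would face exactly this underdetermination when trying to recover the data structures and processes the machine 'really' implements.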

A challenge to the genus + difference template. What if we take grandfatherhood as a species of the genus 'family relation' ? But all family relations depend in their definition on the primitive relations of fatherhood and motherhood - which are better known than the other relations.

Philosophy is something that must always keep beginning again, beginning from the beginning. But what could such a beginning be ?  Either a formal game with rules, or a mere description of what is, free from suppositions (as a doctor observes symptoms).  In the first case we face the problem of assigning meaning to the pieces and the rules; in the second case: is it really possible to free ourselves from presuppositions, and must not the description itself depend on language ? The extreme objectivity of phenomenology paradoxically becomes extreme subjectivism.

From the certain knowledge of the moral law we deduce the existence of other sentient beings. The law implies the possibility of their existence. Assuming the appearances of such beings are real can only be good; assuming they are not real could be disastrous and bad. Hence the reality of other apparent sentient beings is a basic postulate of practical reason. We also have an argument from the reality of mathematical concepts: since the appearances of the natural world participate in and consistently conform to mathematics, it is reasonable and plausible to assign to them some degree of reality.

Kant, in the A-version of the transcendental deduction of the pure concepts of the understanding: one passage seems to suggest an argument that can be paraphrased as pointing out that Hume's account of laws is self-defeating.  It is no good to say that our knowledge of laws comes from the frequent association of certain phenomena, for this itself just states an alleged psychological law about the human mind, which in turn must be just such an inductive generalization or habit - one which is in fact contradicted by experience, as Schopenhauer pointed out regarding the regular succession of day and night.

In English the indefinite article is sometimes used in a definite sense: everybody has a father.

Perhaps time is already a kind of consciousness and memory of the world. Every instant the universe disappears and is replaced by a similar universe.  How can 'the' universe be real ? Which universe is 'the' universe ? Point to it (cf. the dialectic of sense-certainty in the Phenomenology of Spirit).  Just as the cells in a living organism are replaced - so that the organism is a kind of abstraction, a structured wake against the continuous flow of matter, energy and information - so too time represents the living recycling process of the universe. Time must carry information.  Everything is ultimately a concept; but what is a concept ?

Add to the discussion on Measure: well-posed problems in PDE, and the cases where continuous dependence on the initial conditions fails.
