Let us face it. We know and understand very little about the 'meaning' of such homely terms as 'water' (mass noun). Meaning is not 'inscrutable', just very complex, and it has not been investigated with complete candor or sufficiently penetrating insight.
A linguistic segment may acquire individual additions or variations of meaning depending on linguistic context (there is no water-tight segmentation) and yet still carry a certain invariant meaning in all these cases - all of which cannot be brushed away under the term 'connotation'. For instance, compare the expressions 'water is wet', 'add a little water' and 'the meaning of the term "water"'.
This is clearly related to psychologism and its problems and the inter-subjective invariance of meaning.
In literary criticism there is actually much more linguistic-philosophical acumen, for example in asking 'what does the term X mean for the poet' or 'explain the intention behind the poet's use of the term X'.
Let us face it. Counterfactuals and 'possible worlds', if they are to make any sense at all, demand vastly more research and a more sophisticated conceptual framework. We do not know if there could be any world alternative (in any degree of detail) to the present one. The only cogent notion of 'possible world' is a mathematical one, or one based on mathematical physics. There is at present no valid metaphysical or 'natural' one - or one not tied to consciousness and the problem of free-will.
Given a feature of the world we cannot say a priori that this feature could be varied in isolation in the context of some other possible world. For instance, imagining an alternative universe exactly like this one except that the formula for water is not H2O is not only incredibly naive but downright absurd.
Just as it is highly problematic that individual features of the world could vary in isolation in the realm of possibility, so too is it highly problematic that we can understand the 'meaning' of terms in isolation from the 'meaning' of the world as a whole.
There is no reason not to consider that there is a super-individual self (Husserl's transcendental ego or Kant's transcendental unity of apperception) as well as a natural ego in the world. What do we really know about the 'I', the 'self', all its layers and possibilities ? The statement 'I exist' is typically semantically complex and highly ambiguous. But it has at least one sense in which it cannot be 'contingent'. Also considerations from genetic epistemology can lead to doubt that it is a priori.
There are dumb fallacies which mix up logic and psychology, ignore one of them, artificially separate them or ignore obvious semantic distinctions. And above all the sin of confusing the deceptively simple surface syntax of natural language with authentic logical-semantic structure ! For instance: 'Susan is looking for the Loch Ness Monster' and 'Susan is looking for her cat'. It is beyond obvious that the first sentence directly expresses something that merely involves Susan's intentions and expectations whilst the second sentence's most typical interpretation involves direct reference to an actual cat. The two sentences are of different types.
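One way to make the difference in logical type visible - offered only as an illustrative sketch, with our own predicate symbols - is to give the relational reading of the cat sentence an explicit existential quantifier over an actual cat,
$$\exists x\,\big(\mathrm{Cat}(x) \wedge \mathrm{OwnedBy}(x, \mathrm{susan}) \wedge \mathrm{Seeks}(\mathrm{susan}, x)\big),$$
while the Loch Ness sentence is better rendered with an intensional operator whose clausal argument carries no commitment to any monster,
$$\mathrm{SeeksToFind}\big(\mathrm{susan},\ \exists x\,(\mathrm{LochNessMonster}(x) \wedge \mathrm{Finds}(\mathrm{susan}, x))\big).$$
The surface syntax of the two English sentences is nearly identical; the logical forms are not.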
We live in the age of computers and algorithms. Nobody in their right mind would wish to identify a 'function' with its 'graph' except in the special field of mathematics or closely connected areas. If we wish to take concepts as functions (or as functions from possible worlds to truth values) then obviously their intensional computational structure matters as much as their graphs. Hence we bid farewell to the pseudo-problems of non-denoting terms.
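A minimal sketch of the point in code (the example and names are ours): two procedures with exactly the same graph but entirely different intensional, computational structure.

```python
# Two ways of computing the same graph: extensionally identical,
# intensionally very different.  Purely illustrative example.

def sum_by_loop(n: int) -> int:
    """Add 1 + 2 + ... + n step by step (a linear number of additions)."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_by_formula(n: int) -> int:
    """Gauss's closed form: the same graph obtained in constant time."""
    return n * (n + 1) // 2

# Identifying each function with its graph erases precisely the difference
# that matters computationally (number of steps, structure of the algorithm).
assert all(sum_by_loop(n) == sum_by_formula(n) for n in range(1000))
```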
Proper names are like titles for books we are continuously writing during our life - and in some rare cases we stop writing and discard the book. And one book can be split into two or two books merged into one.
It is very naive to think that in all sentences which contain so-called 'definite descriptions' a single logical-semantic function can be abstracted. We must do away with this crude naive abstractionism and attend to the semantic and functional richness of what is actually meant without falling into the opposite error of meaning-as-use, etc.
For instance 'X is the Y' can occur in the context of learning: a fact about X is being taught and incorporated into somebody's concept of X. Or it can be an expression of learned knowledge about X: 'I have been taught or learned that X is the Y'. Or it can be an expression of the result of an inference: 'it turns out that it is X that is the Y'. Why must all of this correspond to the same 'proposition' or Sinn ?
Abstract nouns are usually learnt in one go, as part of linguistic competence, while proper names reflect an evolving, continuous, even revisable learning process. Hence these two classes obey different logical laws.
The meaning of the expression 'to be called 'Mary'' must contain the expression 'Mary'. So we know something about meanings !
How can natural language statements involving dates be put into relationship to events in a mathematical-scientific 'objective' world (which has no time or dynamics) when such dates are defined and meaningful only relative to human experience ? What magically fixes such a correspondence ? The same goes for the here and now in general. What makes our internal experience of a certain chair correspond to a well-defined portion of timeless spatio-temporal objectivity ?
What if most if not all modern mathematical logic could be shown to be totally inadequate for human thought in general, and in particular for philosophical thought and the analysis of natural language ? What if modern mathematical logic were shown to be of interest only to mathematics itself and to some applied areas such as computer science ? By modern mathematical logic we mean certain classes of symbolic-computational systems starting with Frege but also including all recent developments. All these classes share, or move within, a limited domain of ontological, epistemic and semantic presuppositions and postulates.

What if entirely different kinds of symbolic-computational systems were called for to furnish an adequate tool for philosophical logic, for philosophy, for the analysis of language and of human thought in general ? New kinds of symbolic-computational systems based on entirely different ontological, epistemic and semantic postulates ?

The 'symbols' used must 'mean' something, whatever we mean by 'meaning'. But what, exactly ? Herein lies the real difficulty. See the books of Claire Ortiz Hill. It is our hunch that forcing techniques and topos semantics will be very relevant.

However there remains the problem of infinite regress: no matter how we effect an analysis within the web of ontology, epistemology and semantics, this analysis will always involve elements in terms of which it is carried out. These elements in turn fall directly back within the scope of the original ontological, epistemological and semantic problems.

If mathematics, logic and philosophy have important and deep connections, it was perhaps the way these connections were conceived that was mistaken. Maybe it is geometry, rather than classical mathematical logic, that is more directly relevant to philosophy.

What if a first step towards finding this new logic were the investigation of artificial ideal languages (where we take 'language' in the most general sense possible) and the analysis of why and how they work as a means of communication ?
Consider an alien race that only understood first-order logic. How would we explain the rules of Chess, Go or Backgammon ? And how do we humans understand and learn the rules of these games when their expression in first-order logic is so cumbersome, convoluted and extensive ? Expressing them in a programming language is much simpler... perhaps we need higher-level languages which are still formal and can be reduced to lower-level languages as occasion demands. How do we express natural language high-level game concepts, tactics and strategy, in terms of low-level logic ?
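A toy sketch of what such a higher-level formal statement looks like (the game fragment and its encoding are ours, chosen only to make the contrast vivid): a rule that a first-order axiomatization would scatter over many clauses about squares and occupancy takes a few lines in a programming language.

```python
# A toy 'rook moves' rule stated in a higher-level formal language.
# Encoding: a board is a dict from (file, rank) pairs to piece names;
# this is our own illustrative encoding, not a standard chess library.

def rook_move_legal(board, src, dst):
    """A rook move is legal if source and destination share a file or a rank
    and every intermediate square is empty."""
    (sf, sr), (df, dr) = src, dst
    if sf != df and sr != dr:
        return False                      # not a straight line
    step = ((df > sf) - (df < sf), (dr > sr) - (dr < sr))
    square = (sf + step[0], sr + step[1])
    while square != dst:
        if square in board:               # a piece blocks the path
            return False
        square = (square[0] + step[0], square[1] + step[1])
    return dst not in board or board[dst].startswith("enemy")

board = {(1, 1): "rook", (1, 4): "enemy pawn"}
print(rook_move_legal(board, (1, 1), (1, 4)))   # True: capture along the file
print(rook_move_legal(board, (1, 1), (3, 3)))   # False: not a rook line
```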
Strange indeed to think that merely recursively enumerable systems of signs can represent or express all of reality... how can uncountable reality ever be captured by at most countable languages (cf. the Löwenheim-Skolem theorems, the problems with categoricity, non-standard analysis, etc.) ?
All mathematical logic - in particular model theory - seems itself to presuppose that it is formalizable within ZF(C). Is this not circular ? Dare to criticize standard foundations, dare to propose dependent type theory, homotopy type theory or higher topos theory as alternative foundations.
The Löwenheim-Skolem theorems cannot be used to argue for the uncertainty or imprecision of formal systems because, for instance: (i) these results concern first-order logic, and the situation for second- and higher-order logic is radically different (for instance with regard to categoricity); (ii) according to the formal verification principle these metatheorems themselves have to be provable in principle in a formal metasystem, and if we do not attach precise meaning to the symbols and certainty to the deductive conclusions of the metasystem, what right have we to attach any definite meaning or certainty to the Löwenheim-Skolem theorems themselves ?
But of course the formal verification principle needs to be formulated with more precision, for obviously, given any sentence in a language, we can always think of a trivial recursive axiomatic-deductive system in which this sentence can be derived. The axiomatic-deductive system has to satisfy properties such as axiomatic-deductive minimality and optimality and type-completeness, i.e., it must capture a significantly large quantity of true statements of the same type - the same 'regional ontology'. Also the axioms and primitive terms must exhibit a degree of direct, intuitive obviousness and plausibility. And the system must ideally be strong enough to express the 'core analytic' logic.
The formal mathematics project might well be the future of mathematics itself.
The problems of knowledge: either we go back to first principles and concepts, the seeds, but lose the actual effective development, unfolding, richness, life - bearing in mind also that the very choice of principles might have to change according to goals and circumstance - or else we delve into the unfolding richness of science but become lost in the alleys of specialization and limited, partial views. Either we are too far away to see detail and life or we are too close to see anything but a small part and miss the big picture. Also, when we are born into the world 'knowledge' is first forced onto us; there is both contingency and necessity. It is only later that we review what we have learnt. A great step is when we step back to survey knowledge itself, attempt to obtain knowledge about knowledge, to criticize knowledge. Transcendental knowledge is not the same as the ancient project of 'first philosophy'.
If we take natural deduction for first-order logic and assume the classical expression of $\exists$ in terms of $\forall$, then we do not need the natural deduction rules for $\exists$ at all. This can be used as part of my argument related to ancient quantifier logic. Aristotle's metalogic in the Organon is second-order or even third-order.
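A sketch of why the $\exists$ rules become redundant (classical logic assumed): define
$$\exists x\, P(x) \;:\equiv\; \neg \forall x\, \neg P(x).$$
Then $\exists$-introduction is derivable: from $P(t)$ and the assumption $\forall x\,\neg P(x)$ we obtain $\neg P(t)$ by $\forall$-elimination, a contradiction, hence $\neg\forall x\,\neg P(x)$. And $\exists$-elimination is derivable classically: given $\neg\forall x\,\neg P(x)$ and a derivation of $C$ from $P(y)$ with $y$ fresh, assume $\neg C$; then $\neg P(y)$ follows for the arbitrary $y$, so $\forall x\,\neg P(x)$ by $\forall$-introduction, contradicting the premise; hence $C$.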
Overcoming the categories and semantics - or rather showing their independence and holism. With this theme we can unite such disparate thinkers as Sextus, Nâgârjuna, Hume and Hegel - and others to a lesser extent (for instance Kant). Notice the similarity between the discussions of cause in Sextus, Nâgârjuna and Hegel. The difference is that Sextus aims for equipollence, Nâgârjuna rejects all the possibilities of the tetralemma, while Hegel continuously subsumes the contradictions into more contentful concepts, hoping thereby to ladder his way up to the absolute. And yet how pitiful is the state of logic as a science... once we move away from classical mathematics and computer science. The idea of a formal mathematical logic (or even grammar) adequate for other domains of thought remains elusive !
We can certainly completely separate the content and value of Aristotle's Organon and Physics from Aristotle's politics and general world-view. Can we do this for Plato too ?
Cause-and-effect. The discrete case. Let $Q$ denote the set of possible states of the universe at a given time and denote the state at time $t$ by $q(t)$. Then $q(t)$ will depend on the states at previous times. Thus determinism is expressed by functions $f_t: \Pi_{t' < t} Q \rightarrow Q$. Now suppose that $Q$ can be decomposed as $S^B$ where $B$ represents a kind of proto-space and $S$ the local states for each element $b\in B$ (compare the situation in which an elementary topos turns out to be a Grothendieck topos). Now we can ask about the immediate cause of the states of certain subsets of $B$ at a time $t$ - that is, the subset of $B$ whose variation of state at previous times would change the present state. But a more thorough investigation of causality must involve continuity and differentiability in an essential way. Determinism and cause-and-effect depend on the remarkable order property of the real line and indeed on the whole problem of infinitesimals...
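A toy instance of this scheme (our own illustrative model, not a claim about physics): take $B$ finite, $S = \{0,1\}$, and a local deterministic update, so that the 'immediate cause' of a cell is just its neighbourhood at the previous time.

```python
# B = range(WIDTH) plays the role of proto-space, S = {0, 1} the local
# states, and determinism is a map from the previous global state to the
# next one (a rule-110-style cellular automaton, purely illustrative).

WIDTH = 16
RULE = 110  # Wolfram rule number, encoding the local update table

def step(state):
    """Deterministic update: the new state of cell b depends only on the
    previous states of cells b-1, b, b+1 (with wrap-around)."""
    nxt = []
    for b in range(WIDTH):
        neighbourhood = (state[(b - 1) % WIDTH] << 2) | (state[b] << 1) | state[(b + 1) % WIDTH]
        nxt.append((RULE >> neighbourhood) & 1)
    return nxt

def immediate_cause(b):
    """The subset of B whose variation at time t-1 can change cell b at time t."""
    return {(b - 1) % WIDTH, b, (b + 1) % WIDTH}

state = [0] * WIDTH
state[WIDTH // 2] = 1
for t in range(5):
    state = step(state)
print(state, immediate_cause(3))
```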
The problem with modern physics is that it lacks a convincing ontology. Up to now we have none except the division into regions of space-time and their field-properties. Physics should be intuitively simple. But all ontologies are approximative only and ultimately confusing.
Does Lawvere's theory of quantifiers as adjoints allow us to view logic as geometry ? $\exists$ corresponds to projection and $\forall$ to containment of fibers. Let $\pi: X \times Y \rightarrow X$ be the canonical projection and let a geometric object $P \subset X\times Y$ represent a binary predicate. Then $\exists y P(x,y)$ is represented by the predicate $\pi(P) \subset X$ and $\forall y P(x,y)$ is represented by $\{x \in X: \pi^{-1}(x) \subset P\}$. For monadic predicates we use $\pi: X \rightarrow \{\star\}$ so that for $P \subset X$ we have that $\exists x P(x) = \{\star\}$ corresponds to $P$ being non-empty and $\forall x P(x) = \{\star\}$ corresponds to $P = X$. Combining this we see that $\forall x \exists y P(x,y)$ corresponds to $\pi(P) = X$ and $\exists x \forall y P(x,y)$ corresponds to $P$ containing a fiber $\pi^{-1}(x)$. Exercise: interpret the classical expression of $\forall$ as $\neg\exists\neg$ geometrically. Conjunction is intersection, disjunction is union. What is the geometrical significance of classical implication $P \rightarrow Q$ as $P^c \cup Q$ (for monadic predicates) ? This is all of $X$ precisely when $P \subset Q$; so it measures how far we are from the situation of containment.
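A finite-set sketch of these operations (the sets and the predicate are our own toy data): the quantifiers really are computed as a projection and a fiber-containment test.

```python
# Quantifiers as operations along the projection pi : X x Y -> X,
# computed on finite sets.  Purely illustrative data.

X = {0, 1, 2}
Y = {'a', 'b'}
P = {(0, 'a'), (0, 'b'), (1, 'a')}           # a binary predicate P ⊆ X × Y

exists_y = {x for (x, y) in P}                # ∃y P(x,y): the projection π(P)
forall_y = {x for x in X
            if all((x, y) in P for y in Y)}   # ∀y P(x,y): x whose whole fiber lies in P

print(exists_y)   # {0, 1}
print(forall_y)   # {0}: only the fiber over 0 is entirely contained in P

# ∀x ∃y P(x,y) holds iff the projection is all of X;
# ∃x ∀y P(x,y) holds iff some fiber π⁻¹(x) is contained in P.
print(exists_y == X)        # False: the fiber over 2 misses P entirely
print(len(forall_y) > 0)    # True
```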
We have a meaning M and project it onto a formal expression E in a system S. Then we apply mechanical rules to E to obtain another formal expression E'. Now somehow we must be able to extract meaning again from E' to obtain a meaning M'. But how is this possible ? Reason, argument, logic, language - it is all very much like a board-game. The foundations of mathematics: this is the queen of philosophy.
Jan Westerhoff's book on the Madhyamaka, p. 96. I fail to see how the property "Northern England" can depend existentially on the property "Southern England", because conceptual dependency only makes sense relative to a formal system. I grant that B may be a defined property and that A's definition may explicitly use B. But why can't we just expand out B in terms of the primitives of the formal system in use ? And what does it even mean for two concepts to be equal ? What are we doing when replacing a concept by its definition (and Frege's puzzle, etc.) ?
A must read: Hayes' essay on Nâgârjuna. Indeed svabhava is both being-in-itself and being-for-itself !
T.H. Green on Hume is just as good as anything Husserl or Frege wrote against psychologism or empiricism.
René Thom: quantum mechanics is the intellectual scandal of the 20th century. An incomplete and bad theory that includes the absolutely scientifically unacceptable nonsense of the 'collapse of the wave-function'.
Bring genetic epistemology (child cognitive development) into the foreground of philosophy. Modify Husserl's method into a kind of phenomenological regression.
When we say 'we' do we mean I + he/she or they - or something different ?
There is a formal logico-mathematical perfection in Plato's earlier dialogues. It is where Aristotle, perhaps, got much of his Topics.
If $A$ and $B$ are decidable predicates then the inclusion $A \subset B$ need not be decidable. This is important.
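A sketch of why (the predicates and the search procedure are ours, purely for illustration): membership in $A$ and in $B$ is mechanically checkable for each single number, but settling the inclusion requires an unbounded search over all of them.

```python
# A and B are decidable predicates on the naturals: membership of any
# single n can be checked mechanically.
A = lambda n: n % 2 == 0          # even numbers
B = lambda n: n % 6 == 0          # multiples of six

def refute_subset(a, b, bound=None):
    """Search for a counterexample to 'a is a subset of b'.

    Only a semidecision procedure: if a counterexample exists it is
    eventually found, but if a really is contained in b the unbounded
    search never halts (here cut off artificially with `bound`)."""
    n = 0
    while bound is None or n < bound:
        if a(n) and not b(n):
            return n              # witness that A is not a subset of B
        n += 1
    return None                   # no counterexample below the bound; undecided

print(refute_subset(A, B))        # finds 2: A ⊄ B
print(refute_subset(B, A, 1000))  # None: B ⊆ A holds, but this is not a proof
```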
The effective topos - uniform fibration - all this goes back to understanding predicate logic once propositional logic is understood intensionally in terms of realizability. A proposition means all the ways it is computationally realized. Note that there have to be various ways because of disjunction. This is pure intensionality. So a proposition's meaning is a subset of $\mathbb{N}$. But does this subset have to be itself computable ? Predicates are $\mathcal{P}\mathbb{N}^X$.
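A deliberately crude sketch of the intensional point (Kleene-style number realizability, heavily simplified; the pairing function and the examples are ours): the meaning of a disjunction is a set of numbers, each of which records how the disjunction is realized.

```python
# A proposition is identified with the set of natural numbers that realize it.

def pair(i, n):
    """A standard bijective pairing of naturals (Cantor pairing)."""
    return (i + n) * (i + n + 1) // 2 + n

def disjunction(realizers_p, realizers_q):
    """A realizer of P ∨ Q is a pair: a tag (0 or 1) saying which disjunct
    is realized, together with a realizer of that disjunct.  Two propositions
    with the same truth value can thus have different meanings: the meaning
    is the set of realizers, not a truth value."""
    return {pair(0, n) for n in realizers_p} | {pair(1, n) for n in realizers_q}

P = {3, 5}        # pretend these numbers realize P
Q = set()         # Q has no realizers (it is 'false' in this sense)
print(disjunction(P, Q))   # nonempty: P ∨ Q is realized, and each realizer
                           # records *how* (namely, via P)
```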
Agda is the best proof assistant. Predicates are just fibered types $X \rightarrow Set$. Agda is pure combinatoric 'lego' logic. Elegant, simple, powerful, flexible.
Zalta's encoding, the fusion of properties into an abstract individual object - this is a benign form of self-reflection or return-to-self whereby a property may be predicated of itself in second-order logic.
What does it mean to know something ? For instance the term 'man' is part of linguistic communities. But how can knowing a definition have much to do with scientific knowledge in the modern sense ? And how do we account for the meaning of such terms across possible worlds ? The problem is even harder for the individuals which are the referents of proper names. The term 'man' would seem to conceal an open-ended horizon of facts and knowledge, as indeed does the term 'animal'. But perhaps what remains invariant under the increase of knowledge within the semantic scope of the terms is the relation between the two terms. The ancient knowledge of definitions was thus knowledge of the invariant relation between epistemically open terms. But of course the 'difference' employed can itself be open-ended and capable of extension and refinement; the whole idea is that it should be simpler and more stable than the genus and the species.
Difference between 'concept' and 'meaning'. When somebody says 'man !' clearly the mental content invoked is not the sum total of one's epistemic domain related to this term - one's concept of man. Rather it is a minimal relevant 'sketch' (and this can be ascertained by the phenomenological method). Perhaps like 'pointers' in C. Something similar must be happening for proper names. Indeed the whole problem of proper names is related to individual essences and gets tangled with the problem of determinism. Maybe 'sketches' are like definite descriptions...which only point to a more complete concept.
The Halting problem is undecidable. Suppose we had a machine with code $u$ such that, given as input the code $e$ of a machine and an input $f$, it could tell whether $\{e\}f \downarrow$. Now consider the machine with code $d$ which, given an input $x$, computes as follows: if $\{x\}x \downarrow$ then it never stops, otherwise it stops. Does $\{d\}d$ stop ? If it does, that means that $\{d\}d$ must never stop (contradiction). If it doesn't, that means that by definition of $d$ it does. Hence the machine $u$ cannot exist. For term-rewriting systems: there is no term rewriting system U which can be interpreted as giving the answer with regard to the derivability of a word W starting from a word S with rules R.
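The same diagonal argument as a code sketch (purely illustrative; the function `halts` stands for the hypothetical machine $u$, and the point of the argument is precisely that it cannot exist).

```python
# Diagonalization against a hypothetical halting oracle.

def halts(program_source: str, argument: str) -> bool:
    """Hypothetical oracle: True iff the program halts on the argument.
    No such function can exist; it is assumed only for the argument."""
    raise NotImplementedError("cannot exist")

def diagonal(program_source: str) -> None:
    """The machine d from the argument above."""
    if halts(program_source, program_source):
        while True:          # {x}x halts  ->  d loops forever
            pass
    return                   # {x}x loops  ->  d halts

# Feeding diagonal its own source code reproduces the contradiction:
# if diagonal(diagonal_source) halts, then by its own definition it loops,
# and if it loops, then by its own definition it halts.
```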
Globality: a function $f$ may be continuous as a function restricted to each of two disjoint sets $A$ and $B$ of real numbers and yet fail to be continuous on $A \cup B$ - take $f = 0$ on $A = [0,1)$ and $f = 1$ on $B = [1,2]$. The definition of continuity at a boundary point is the delicate issue.
A postulate of pure reason: there is a term rewriting system T and term rewriting system S such that a derivation in T is taken as certain knowledge that a certain word cannot be derived in S.
Two visions of the absolute: the plurality of mutually reflecting, self-reflecting, interpreting and meta-interpreting axiomatic-deductive systems, and the systems-theoretic view of a plurality of learning, dynamic, communicating, interacting systems which represent the whole within themselves (representation). Thus we have the paradigm of deductive or proof systems for the first vision and computational systems for the second, though by 'computational' system we include non-standard paradigms beyond the Turing limit as well as systems inspired by biology and by consciousness. And yet it is through axiomatic-deductive systems that such systems are known and understood.
The whole and the part. How parts are organized into the whole via a relation between parts. To be able to compare and identify parts - thus to seize the type of a part and differentiate between its instances. To be able to change the whole through replacement of a particular part.
topos-HOL with reflection/representation: $enc: [I, [I]]$.
Absolutism allows the validity of a relative relativism relative to an absolute canvas. As proof it only requires that there be some absolute, absolutely knowable truths - like core arithmetic and human and animal rights - which then form the required framework for all further meaningful relative perspectives and action. Relativism on the other hand cannot tolerate there being any absolute whatsoever; furthermore, in so doing it itself becomes a form of absolutism and thus implodes.
Systems theory in ancient philosophy: when genus, property or definition depend essentially on interaction and relation. The axiomatic-deductive vs. the systems-theoretic view. And how much better, from a logical perspective, are the analytic and semi-analytic. This is the way to do differential equations.
What are the scientific theories which are strictly necessary for the design and manufacture of the most important technology ? And what mathematics would be strictly necessary for these scientific theories ? And are not all our functions analytic with computable coefficients - and how should we view numerical methods philosophically ?
The correct foundation for the calculus and the theory of the continuum is still an open problem. We must get rid of the bad influence of ZFC foundations. Does it make sense to say that a recursive axiomatic-deductive system can 'grasp' the continuum ? We need to investigate and promote alternative foundations based on dependent type theory and category theory.
Mathematics is primarily relevant and valuable in the following aspects: i) for application and deployment in science, including the clarification of the essential optimal structure of the relevant deployment, and ii) as a preparation and antechamber to pure logic and philosophy - with particular emphasis on computability. The other kind of mathematics, pursued for its own sake without any regard for logic, philosophical foundations or proof-theoretic and conceptual optimality, while legitimate, is certainly overrated by society and should not be set up as a paradigm of human 'intelligence'. The same goes for theoretical physics which has no contact with experimentation, empirical evidence or practical applications, and for theories which are logically and conceptually radically incomplete or inconsistent.
Physics lacks an ontology. Its usual ontology is merely derived, accidental and approximative. Consider a computer as a huge cellular automaton (CPU + RAM + storage and peripherals). How can we justify at a low level the abstraction to higher-level data structures and processes ? For instance if aliens observed the working of a computer at a low level ?
A challenge to the genus + difference template. What if we take grandfatherhood as a species of the genus 'family relation' ? But all family relations depend for their definition on the primitive relations of fatherhood and motherhood - which are better known than the other relations.
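For instance (the notation is ours, purely illustrative), grandfatherhood is not obtained from a genus by adding a difference; it is assembled directly from the primitives:
$$\mathrm{Grandfather}(x,z) \;:\equiv\; \exists y\,\big(\mathrm{Father}(x,y) \wedge (\mathrm{Father}(y,z) \vee \mathrm{Mother}(y,z))\big).$$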
Philosophy is something that must always keep beginning again, beginning from the beginning. But what could such a beginning be ? Either a formal game with rules, or a mere description of what is, free from suppositions (as a doctor observes symptoms). In the first case we face the problem of assigning meaning to the pieces and the rules; in the second case: is it really possible to free ourselves from presuppositions, and must not the description itself depend on language ? The extreme objectivity of phenomenology paradoxically becomes extreme subjectivism.
From the certain knowledge of the moral law we deduce the existence of other sentient beings. The law implies the possibility of their existence. Assuming the appearances of such beings are real can only be good. Assuming they are not real could be disastrous and bad. Hence the reality of other apparent sentient beings is a basic postulate of practical reason. There is also an argument from the reality of mathematical concepts: since the appearances of the natural world participate in and consistently conform to mathematics, it is reasonable and plausible to assign to them some degree of reality.
Kant in the A-version of the transcendental deduction of the pure concepts of the understanding. One passage seems to suggest an argument that can be paraphrased as pointing out that Hume's account of laws is self-defeating. It is no good to say that our knowledge of laws comes from the frequent association of certain phenomena, for this is itself just the statement of an alleged psychological law about the human mind, which in turn must itself be just such an inductive generalization or habit - one in fact contradicted by experience, as Schopenhauer pointed out regarding the regular succession of day and night.
In English the indefinite article is sometimes used in a definite sense: everybody has a father.
Perhaps time is already a kind of consciousness and memory of the world. Every instant the universe disappears and is replaced by a similar universe. How can 'the' universe be real ? Which universe is 'the' universe ? Point to it (cf. the dialectic of sense certainty in the Phenomenology of Spirit). Just as cells in living organisms are replaced, the organism is a kind of abstraction, a structured wake against the continuous flow of matter, energy and information, so too time represents the living recycling process of the universe. Time must carry information. Everything is ultimately a concept, but what is a concept ?
Add to the discussion on Measure: well-posedness for problems in PDE, and the failure of continuous dependence on initial conditions.