Let us face it. We know and understand very little about the 'meaning' of such homely terms as 'water' (mass noun). Meaning is not 'inscrutable', just very complex, and it has not been investigated with complete candor or with penetrating enough insight.
A linguistic segment may acquire individual additions or variations of meaning depending on linguistic context (there is no water-tight segmentation) and yet still retain a certain invariant meaning in all these cases - and none of this can be brushed away under the term 'connotation'. For instance, compare the expressions 'water is wet', 'add a little water' and 'the meaning of the term 'water''.
This is clearly related to psychologism and its problems, and to the inter-subjective invariance of meaning.
In literary criticism there is actually much more linguistic-philosophical acumen, for example in asking 'what does the term X mean for the poet ?' or in the demand 'explain the intention behind the poet's use of the term X'.
Let us face it. Counterfactuals and 'possible worlds', if they are to make any sense at all, demand vastly more research and a more sophisticated conceptual framework. We do not know if there could be any world alternative (in any degree of detail) to the present one. The only cogent notion of 'possible world' is a mathematical one, or one based on mathematical physics. There is at present no valid metaphysical or 'natural' one - or one not tied to consciousness and the problem of free will.
Given a feature of the world we cannot say a priori that this feature could be varied in isolation in the context of some other possible world. For instance, imagining an alternative universe exactly like this one except that the formula for water is not H2O is not only incredibly naive but downright absurd.
Just as it is highly problematic that individual features of the world could vary in isolation in the realm of possibility, so too is it highly problematic that we can understand the 'meaning' of terms in isolation from the 'meaning' of the world as a whole.
There is no reason not to consider that there is a super-individual self (Husserl's transcendental ego or Kant's transcendental unity of apperception) as well as a natural ego in the world. What do we really know about the 'I', the 'self', all its layers and possibilities ? The statement 'I exist' is typically semantically complex and highly ambiguous. But it has at least one sense in which it cannot be 'contingent'. Also, considerations from genetic epistemology can lead to doubt that it is a priori.
There are dumb fallacies which mix up logic and psychology, ignore one of them, artificially separate them or ignore obvious semantic distinctions. And above all there is the sin of confusing the deceptively simple surface syntax of natural language with authentic logical-semantic structure ! For instance: 'Susan is looking for the Loch Ness Monster' and 'Susan is looking for her cat'. It is beyond obvious that the first sentence directly expresses something that merely involves Susan's intentions and expectations, whilst the second sentence's most typical interpretation involves direct reference to an actual cat. The two sentences are of different types.
We live in the age of computers and algorithms. Nobody in their right mind would wish to identify a 'function' with its 'graph' except in the special field of mathematics or closely connected areas. If we wish to take concepts as functions (or as functions from possible worlds to truth-values) then obviously their intensional computational structure matters as much as their graphs. Hence we bid farewell to the pseudo-problems of non-denoting terms.
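A minimal illustration in Python (an illustrative sketch, not tied to any particular semantic theory): two procedures with exactly the same graph but quite different intensional, computational structure.

    # Two procedures with the same graph (the same input-output pairs)
    # but very different intensional, computational structure.

    def is_even_by_parity(n: int) -> bool:
        """Decide evenness by inspecting the remainder modulo 2."""
        return n % 2 == 0

    def is_even_by_descent(n: int) -> bool:
        """Decide evenness by repeatedly subtracting 2."""
        n = abs(n)
        while n >= 2:
            n -= 2
        return n == 0

    # Extensionally identical on every tested input...
    assert all(is_even_by_parity(k) == is_even_by_descent(k) for k in range(-100, 100))
    # ...yet as procedures ('concepts') they differ: one runs in constant time,
    # the other in time proportional to |n|.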
Proper names are like titles for books we are continuously writing during our life - and in some rare cases we stop writing and discard the book. And one book can be split into two or two books merged into one.
It is very naive to think that in all sentences which contain so-called 'definite descriptions' a single logical-semantic function can be abstracted. We must do away with this crude naive abstractionism and attend to the semantic and functional richness of what is actually meant without falling into the opposite error of meaning-as-use, etc.
For instance 'X is the Y' can occur in the context of learning: a fact about X is being taught and incorporated into somebody's concept of X. Or it can be an expression of learned knowledge about X: 'I have been taught or learned that X is the Y'. Or it can be an expression of the result of an inference: 'it turns out that it is X that is the Y'. Why must all of this correspond to the same 'proposition' or Sinn ?
Abstract nouns are usually learnt in one go, as part of linguistic competence, while proper names reflect an evolving, continuous, even revisable learning process. Hence these two classes have different logical laws.
The meaning of the expression 'to be called 'Mary'' must contain the expression 'Mary'. So we know something about meanings !
How can natural language statements involving dates be put into relationship with events in a mathematical-scientific 'objective' world (which has no time or dynamics) when such dates are defined and meaningful only relative to human experience ? What magically fixes such a correspondence ? The same goes for the here and now in general. What makes our internal experience of a certain chair correspond to a well-defined portion of timeless spatio-temporal objectivity ?
What if most if not all modern mathematical logic could be shown to be totally inadequate for human thought in general and in particular philosophical thought and the analysis of natural language ? What if modern mathematical logic were shown to be only of interest to mathematics itself and to some applied areas such as computer science ?
By modern mathematical logic we mean certain classes of symbolic-computational systems starting with Frege but also including all recent developments. All these classes share or move within a limited domain of ontological, epistemic and semantic presuppositions and postulates.
What if entirely different kinds of symbolic-computational systems are called for to furnish an adequate tool for philosophical logic, for philosophy, for the analysis of language and of human thought in general ? New kinds of symbolic-computational systems based on entirely different ontological, epistemic and semantic postulates ?
The 'symbols' used must 'mean' something, whatever we mean by 'meaning'. But what, exactly ? Herein lies the real difficulty. See the books of Claire Ortiz Hill. It is our hunch that forcing techniques and topos semantics will be very relevant.
However there remains the problem of infinite regress: no matter how we effect an analysis in the web of ontology, epistemology and semantics, this will always involve elements into which the analysis is carried out. These elements in turn fall again directly into the scope of the original ontological, epistemological and semantic problems.
If mathematics, logic and philosophy have important and deep connections, it was perhaps the way that these connections were conceived that was mistaken. Maybe it is geometry rather than classical mathematical logic that is more directly relevant to philosophy.
What if a first step towards finding this new logic were the investigation of artificial ideal languages (where we take 'language' in the most general sense possible) and the analysis of why and how they work as a means of communication ?
Consider an alien race that understood only first-order logic. How would we explain the rules of Chess, Go or Backgammon ? And how do we humans understand and learn the rules of these games when their expression in first-order logic is so cumbersome, convoluted and extensive ? Expressing them in a programming language is much simpler... perhaps we need higher-level languages which are still formal and can be reduced to lower-level languages as occasion demands. How do we express natural language high-level game concepts, tactics and strategy in terms of low-level logic ?
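For contrast, a sketch in Python (purely illustrative; it ignores board occupancy and the rest of the rules) of how compactly a single chess rule - the geometry of the knight's move - can be stated in a programming language, against the clutter of its first-order axiomatization over squares and coordinates:

    # One chess rule - the geometry of the knight's move - as a short function.
    # Squares are (file, rank) pairs with 0 <= file, rank <= 7.

    def knight_move_shape(src: tuple[int, int], dst: tuple[int, int]) -> bool:
        """A knight moves two squares along one axis and one along the other."""
        df = abs(src[0] - dst[0])
        dr = abs(src[1] - dst[1])
        return {df, dr} == {1, 2}

    assert knight_move_shape((1, 0), (2, 2))        # b1 -> c3
    assert not knight_move_shape((1, 0), (1, 2))    # b1 -> b3 is not a knight move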
Strange indeed to think that merely recursively enumerable systems of signs can represent or express all of reality... how can uncountable reality ever be captured by at most countable languages (cf. the Löwenheim-Skolem theorems, the problems with categoricity, non-standard analysis, etc.) ?
All mathematical logic - in particular model theory - seems itself to presuppose that it is formalizable within ZF(C). Is this not circular ? Dare to criticize standard foundations; dare to propose dependent type theory, homotopy type theory, higher topos theory as alternative foundations.
The Löwenheim-Skolem theorems cannot be used to argue for the uncertainty or imprecision of formal systems because, for instance: (i) these results concern first-order logic, and the situation for second- and higher-order logic is radically different (for instance with regard to categoricity); (ii) according to the formal verification principle these metatheorems themselves have to be provable in principle in a formal metasystem. If we do not attach precise meaning to the symbols and certainty to the deductive conclusions in the metasystem, what right have we to attach any definite meaning or certainty to the Löwenheim-Skolem theorems themselves ?
But of course the formal verification principle needs to be formulated with more precision, for obviously, given any sentence in a language, we can always think of a trivial recursive axiomatic-deductive system in which this sentence can be derived. The axiomatic-deductive system has to satisfy properties such as axiomatic-deductive minimality and optimality and type-completeness, i.e., it must capture a significantly large quantity of true statements of the same type - the same 'regional ontology'. Also the axioms and primitive terms must exhibit a degree of direct, intuitive obviousness and plausibility. And the system must ideally be strong enough to express the 'core analytic' logic.
The formal mathematics project might well be the future of mathematics itself.
The problems of knowledge: either we go back to first principles and concepts, the seeds, but lose the actual effective development, unfolding, richness, life - bearing in mind also that the very choice of principles might have to change according to goals and circumstance - or else we delve into the unfolding richness of science but become lost in the alleys of specialization and limited, partial views. Either we are too far away to see detail and life or we are too close to see anything but a small part and miss the big picture. Also, when we are born into the world 'knowledge' is first forced onto us; there is both contingency and necessity. It is only later that we review what we have learnt. A great step is when we step back to survey knowledge itself, attempt to obtain knowledge about knowledge, to criticize knowledge. Transcendental knowledge is not the same as the ancient project of 'first philosophy'.
If we take natural deduction for first-order logic and assume the classical expression of $\exists$ in terms of $\forall$, then we do not need the natural deduction rules for $\exists$ at all. This can be used as part of my argument related to ancient quantifier logic. Aristotle's metalogic in the Organon is second-order or even third-order.
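To spell out why (a sketch using only the standard classical natural deduction rules): define $\exists x\, P(x) :\equiv \neg\forall x\, \neg P(x)$. Then $\exists$-introduction becomes derivable: from $P(t)$, assume $\forall x\, \neg P(x)$, instantiate to $\neg P(t)$, obtain $\bot$, and discharge the assumption to conclude $\neg\forall x\, \neg P(x)$. And $\exists$-elimination becomes derivable: given $\neg\forall x\, \neg P(x)$ and a derivation of $C$ from $P(a)$ (with $a$ fresh), assume $\neg C$; then $P(a)$ leads to $\bot$, so $\neg P(a)$ and, since $a$ was arbitrary, $\forall x\, \neg P(x)$, contradicting the premise; thus $\neg\neg C$, and $C$ follows by double-negation elimination - the distinctively classical step.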
Overcoming the categories and semantics - or rather showing their independence and holism. With this theme we can unite such disparate thinkers as Sextus, Nâgârjuna, Hume and Hegel - and others to a lesser extent (for instance Kant). Notice the similarity between the discussion of cause in Sextus, Nâgârjuna and Hegel. The difference is that Sextus aims for equipollence, Nâgârjuna aims to reject all the possibilities of the tetralemma, while Hegel continuously subsumes the contradictions into more contentful concepts, hoping thereby to ladder his way up to the absolute. And yet how pitiful is the state of logic as a science... once we move away from classical mathematics and computer science. The idea of a formal mathematical logic (or even grammar) adequate for other domains of thought remains elusive !
We can certainly completely separate the content and value of Aristotle's Organon and Physics from Aristotle's politics and general world-view. Can we do this for Plato too ?
Cause-and-effect. The discrete case. Let $Q$ denote the set of possible states of the universe at a given time and denote the state at time $t$ by $q(t)$. This state will depend on the states at previous times. Thus determinism is expressed by functions $f_t: \Pi_{t' < t} Q \rightarrow Q$. Now suppose that $Q$ can be decomposed as $S^B$, where $B$ represents a kind of proto-space and $S$ the local states for each element $b\in B$ (compare the situation in which an elementary topos turns out to be a Grothendieck topos). Now we can ask about the immediate cause of the states of certain subsets of $B$ at a time $t$ - that is, the subset of $B$ whose variation of state would change the present state. But a more thorough investigation of causality must involve continuity and differentiability in an essential way. Determinism and cause-and-effect depend on the remarkable order property of the real line and indeed on the whole problem of infinitesimals...
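A toy sketch of the discrete picture in Python (illustrative only; for simplicity the dependence is on the immediately preceding state rather than on the whole history $\Pi_{t' < t} Q$, and $B$ and $S$ are small finite sets):

    # Global states are tuples indexed by the proto-space B, with entries in S.
    B = range(5)            # proto-space: five 'cells'
    S = (0, 1)              # local states

    def f(state):
        """One deterministic step: each cell copies its left neighbour (cyclically)."""
        return tuple(state[(b - 1) % len(B)] for b in B)

    def immediate_causes(state, b):
        """The subset of B whose variation at time t would change cell b at time t+1."""
        baseline = f(state)[b]
        causes = set()
        for c in B:
            for s in S:
                if s != state[c]:
                    varied = tuple(s if i == c else state[i] for i in B)
                    if f(varied)[b] != baseline:
                        causes.add(c)
        return causes

    q = (1, 0, 0, 1, 0)
    print(immediate_causes(q, 2))   # {1}: cell 2 at t+1 depends only on cell 1 at t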
The problem with modern physics is that it lacks a convincing ontology. Up to now we have none except the division into regions of space-time and their field-properties. Physics should be intuitively simple. But all ontologies are approximative only and ultimately confusing.
Does Lawvere's theory of quantifiers as adjoints allow us to view logic as geometry ? $\exists$ corresponds to projection and $\forall$ to containment of fibers. Let $\pi: X \times Y \rightarrow X$ be the canonical projection and let a geometric object $P \subset X\times Y$ represent a binary predicate. Then $\exists y P(x,y)$ is represented by the predicate $\pi(P) \subset X$ and $\forall y P(x,y)$ is represented by $\{x \in X: \pi^{-1}(x) \subset P\}$. For monadic predicates we use $\pi: X \rightarrow \{\star\}$, so that for $P \subset X$ we have that $\exists x P(x) = \{\star\}$ corresponds to $P$ being non-empty and $\forall x P(x) = \{\star\}$ corresponds to $P = X$. Combining these, we see that $\forall x \exists y P(x,y)$ corresponds to $\pi(P) = X$ and $\exists x \forall y P(x,y)$ corresponds to $P$ containing a fiber $\pi^{-1}(x)$. Exercise: interpret the classical expression of $\forall$ as $\neg\exists\neg$ geometrically. Conjunction is intersection, disjunction is union. What is the geometrical significance of classical implication $P \rightarrow Q$ as $P^c \cup Q$ (for monadic predicates) ? This is all of $X$ only if $P \subset Q$; so it measures how far we are from the situation of containment.
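A finite-set check of this reading in Python (illustrative only; the predicate $P(x,y) \equiv y \le x$ on a $3 \times 3$ grid is just a convenient example):

    X = range(3)
    Y = range(3)
    P = {(x, y) for x in X for y in Y if y <= x}               # P(x,y) iff y <= x

    exists_y = {x for x in X if any((x, y) in P for y in Y)}   # pi(P): the projection
    forall_y = {x for x in X if all((x, y) in P for y in Y)}   # x whose fiber lies in P

    print(exists_y)   # {0, 1, 2} = X, so 'forall x exists y P(x,y)' holds
    print(forall_y)   # {2}: only the fiber over x = 2 lies inside P; since this set
                      # is non-empty, 'exists x forall y P(x,y)' holds as well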
We have a meaning M and project it to a formal expression E in a system S. Then we apply mechanical rules to E to obtain another formal expression E'. Now we must somehow be able to extract meaning again from E' to obtain a meaning M'. But how is this possible ? Reason, argument, logic, language - it is all very much like a board-game. The foundations of mathematics: this is the queen of philosophy.
Jan Westerhoff's book on the Madhyamaka, p. 96. I fail to see how the property "Northern England" can depend existentially on the property "Southern England". Because conceptual dependency only makes sense relative to a formal system. I grant B may be a defined property and A's definition may explicitly use B. But why can't we just expand out B in terms of the primitives of the formal system in use ? And what does it even mean for two concepts to be equal ? What are we doing when replacing a concept by its definition (and Frege's puzzle, etc.) ?
A must read: Hayes' essay on Nâgârjuna. Indeed svabhava is both being-in-itself and being-for-itself !
T.H. Green on Hume is just as good as anything Husserl or Frege wrote against psychologism or empiricism.
René Thom: quantum mechanics is the intellectual scandal of the 20th century. An incomplete and bad theory that includes the absolutely scientifically unacceptable nonsense of the 'collapse of the wave-function'.
Bring genetic epistemology (child cognitive development) into the foreground of philosophy. Modify Husserl's method into a kind of phenomenological regression.
When we say 'we', do we mean I + he/she or they - or something different ?