Pragmatic continuities in empirical science. Some examples from the history of astronomy

Maria de la Concepcion Caamano Allegre
(Stanford University, USA)

The purpose of this work is to emphasize the importance of those pragmatic aspects of empirical science that contribute to its continuity. After the more traditional analysis of empirical science as a corpus of theories ran into serious difficulties in preserving the common intuition that scientific development is, to a certain extent, cumulative in nature, and after the incommensurability thesis dramatically challenged the truth-cumulative view of science, the source of scientific continuity remains unclear. Within the post-Kuhnian tradition, I. Hacking’s influential new experimentalism was a first step towards recognizing experimental practice as the key factor determining scientific development. Nevertheless, the consideration of the pragmatic features of science that this recognition demands is still tentative and dispersed in the current literature in the philosophy of science. This work is an attempt to identify some of those features by examining the pragmatic continuities displayed by the development of astronomy from the Babylonian period to Copernicus.

The paper consists of two parts: the first provides a clarification of the pragmatic approach adopted, and the second offers an application of that approach to the historical examples. The basic assumption shared by the different pragmatic accounts of scientific development is that success in attaining certain goals justifies our beliefs about the world. However, there are alternative pragmatist accounts of the nature of such goals. Following Gerhard Schurz’s classification of pragmatisms (The Role of Pragmatics in Contemporary Philosophy, Verlag Hölder-Pichler-Tempsky, Vienna, 1998), the main ideas introduced here can be characterized overall as being in tune with the so-called “pragmatism to the right”, or k(nowledge)-internal pragmatism (represented, among others, by C. S. Peirce, C. I. Lewis, the later H. Putnam, N. Rescher and G. Schurz), and as sharply disagreeing with the “pragmatism to the left”, or k(nowledge)-external pragmatism (represented, e.g., by W. James, F. C. S. Schiller, R. Rorty and S. P. Stich). According to the former, scientific practice is guided by epistemic purposes; according to the latter, by non-epistemic purposes. The analysis of the pragmatic components of knowledge pursued here focuses on k-internal goals, since, as the examples from astronomy show, cognitive purposes (such as observation, prediction, explanation, control, innovation, acquisition of true beliefs, etc.) are the primary ones leading to the effective production of empirically sound knowledge. In elucidating what kinds of knowledge and practices are most useful for producing empirically sound knowledge, it is argued that procedural knowledge, that is, knowledge about how to perform certain tasks, has proved not only especially effective but also especially cumulative in character. Both formal and empirical (k-internal) useful procedural knowledge exhibits a highly invariant character, as the astronomical case illustrates. The fact that Babylonian computations survived several scientific revolutions in astronomy is not usually emphasized in philosophical discussions. Yet Babylonian computations supplied the main source of empirical data for Ptolemy's Almagest, which in turn provided the main source of empirical data for Copernicus' De Revolutionibus (J. Evans, The History and Practice of Ancient Astronomy, Oxford University Press, Oxford, 1998).

Finally, a further claim supported by the historical evidence offered in this work is the mutual independence of truth and usefulness. The understanding of truth in terms of usefulness has been a constant theme in pragmatism (even in Nicholas Rescher's methodological pragmatism, where methodological usefulness is associated with truth-conduciveness). In this paper, on the contrary, it is argued that k-usefulness gives us a criterion of empirical soundness, with no necessary implications for truth. The continuities of empirical science are explained here not on the basis of truth, but solely on the basis of k-internal usefulness.


Plant Neurobiology: Lessons for the Unity of Science

Paco Calvo Garzón
(Department of Philosophy, University of Murcia, 30100 Murcia, Spain; fjcalvo@um.es)


Recent research (Pruitt et al., 2003; Trewavas, 2005) shows that the behaviour of eukaryotic organisms connects with the molecular level in a uniform manner. Although such a linkage might furnish inductive evidence in favour of the Unity of Science as “a pervasive trend within science” (Oppenheim and Putnam, 1958 – hereafter, O&P), it is still compatible with a reductionist strategy orthogonal to O&P’s stronger reading (“an ideal state of science”). In this paper I propose to study the integration of contemporary scientific knowledge in Cognitive Neuroscience (Gazzaniga et al., 2002) and Plant Neurobiology (Baluška et al., 2006) in order to assess the Unity of Science hypothesis. 

In particular, I shall consider time-estimation in relation to the distinction between online plant behaviour (flower heliotropism) and offline plant behaviour (leaf heliotropism), specifically plants’ nocturnal reorientation in the absence of solar-tracking (Schwartz and Koller, 1986). I shall argue that a mechanistic understanding, via action potentials, of communication at the plant/animal level shows why ontogenesis cannot lend any sort of inductive support to the Unity of Science. The safest bet for the sympathizer of O&P, I contend, is to put eukaryotes in perspective with regard to their shared unicellular ancestors, thus pressing on the indirect factual evidence that evolution provides. 


REFERENCES

Baluška, F., Mancuso, S., and Volkmann, D. (2006) Communication in Plants: Neuronal Aspects of Plant Life. Springer-Verlag.
Bechtel, W., and Richardson, R. (1993) Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. Princeton University Press.
Cashmore, A.R. (2003) “Cryptochromes: enabling plants and animals to determine circadian time”, Cell 114, 537-543.
Gazzaniga, M., Ivry, R., and Mangun, G. (2002) Cognitive Neuroscience: The Biology of the Mind. Norton.
Oppenheim, P., and Putnam, H. (1958) “Unity of Science as a Working Hypothesis,” in Herbert Feigl, Grover Maxwell, and Michael Scriven (eds.), Minnesota Studies in the Philosophy of Science. Minneapolis: University of Minnesota Press, 3-36.
Pruitt, R., Bowman, J., and Grossniklaus, U. (2003) “Plant genetics: a decade of integration”, Nature Genetics 33, 294-304.
Schwartz, A. and Koller, D. (1986) “Diurnal Phototropism in Solar Tracking Leaves of Lavatera cretica”, Plant Physiology 80, 778-781.
Trewavas, A. (2005) “Green plants as intelligent organisms”, Trends in Plant Science 10(9), 413-419.


Networking Disciplines: a New Approach to the Unity of Knowledge?

João Caraça
(Calouste Gulbenkian Foundation /ISEG, Portugal)


Knowledge and the languages devised for its expression and circulation within communities evolve with the times. The emergence of new conditions for technology, economy and society, based on information and knowledge, naturally brings along the need to rearrange and rethink our perception of how the diverse fields and disciplines are organized, how they communicate and interact.
The powerful effects of fragmentation that the forces of economic globalization provoke in the social order of nation-states raise new barriers to understanding and at the same time sharpen the perception of new issues that require interdisciplinary practices to be dealt with adequately. Therefore, the hope of a new unity of knowledge, to be achieved through the proper networking of disciplines, looms again over our minds. This possibility, or this illusion, is analyzed and assessed.


The Symbiotic Phenomenon in the Evolutive Context

Francisco Carrapiço  
(Department of Plant Biology, Faculty of Science, Lisbon University, Portugal)

Symbiosis has frequently been considered a biological curiosity rather than a scientific concept, particularly within the scope of evolution. Nevertheless, symbiosis is a widespread phenomenon of great biological relevance that plays a fundamental role in the organization and evolution of life. Despite this fact, it has not received proper attention either from the scientific community or in university and high school curricula. This situation is reflected in an interpretation of biological evolution based on two classical scientific theories, Darwinism and neo-Darwinism, the only ones that traditionally try to explain this process, and in which symbiosis is neither adequately understood nor adequately handled. For traditional evolutionist authors, symbiosis is nothing more than a residual aspect of the problem of evolution. Recent data, however, point in the opposite direction, showing that symbiosis is a factor of evolutive change that cannot be accommodated within the framework of neo-Darwinian theory. Questions such as the following should have a clear scientific answer and should not be put aside simply because they do not correspond to the mainstream discourse. Why is the symbiotic phenomenon so widespread in nature? Why and how does it perpetuate itself through time? How important is it for the organisms involved? Why do living beings present structural, physiological and/or behavioral changes so well adapted to this relationship? What is the role of symbiosis in the evolutionary process? In this sense, symbiogenesis should be understood as an evolutive mechanism, and symbiosis as the vehicle through which that mechanism unfolds. This represents a point of view opposed to that of neo-Darwinism, or the Modern Synthesis. According to today's dominant theory, evolution is a gradual process consisting essentially of natural selection acting on minimal phenotypic variations. However, most living forms have symbiotic relationships with microorganisms, and so symbiosis seems to play a very important role in the origin, organization and evolution of life. It is an important support for the acquisition of new genomes and new metabolic capacities, which drive the organization and evolution of living forms. In this sense, evolutionary changes can be explained by an integrated cooperation between organisms, in which symbiosis acts not as an exception but as the dominant rule in nature. In a nutshell: is the development of life a saltational symbiotic process, or a gradual process driven by natural selection and leading organisms to adaptation? In a way, this dilemma can be found in two positions clearly expressed by Theodosius Dobzhansky and Jan Sapp. In 1973, Dobzhansky wrote his famous article “Nothing in biology makes sense except in the light of evolution”, in which he conveys the neo-Darwinist view of evolution. Thirty years later, Jan Sapp (2003) included in his book “Genesis: The Evolution of Biology” a new paradigmatic formula, “Nothing in evolution makes sense except in the light of symbiosis”, giving shape to the new ideas on the process of evolution involving symbiotic principles. Indeed, especially since 1859, the year of publication of Darwin's “On the Origin of Species”, evolution has been considered the fundamental concept and organizing factor of modern biology, as well as its structural pillar. 
Without denying many Darwinist principles, the worst thing we could do in studying the process of evolution would be to conflate evolution with, or limit it to, the Darwinist or neo-Darwinist perspective. This leads to the erroneous idea that Darwinism, or neo-Darwinism, is synonymous with biological evolution. Other evolutionist approaches exist, and they need to be deepened and discussed within the biological and social sciences. In this sense, we would like to put forward a set of principles and data that could be integrated into a Symbiogenic Theory of Evolution and contribute towards a new epistemological approach to the symbiotic phenomenon in the evolutive context. This, in our view, could be the beginning of a new paradigm in science that remains almost unexplored.


Keeping track of Neurath's bill: Abstract concepts, stock models and the unity of classical physics

Nancy Cartwright
(Department of Philosophy, London School of Economics, UK)

with

Sheldon Steed 
(Department of Philosophy, London School of Economics, UK)

"We do not arrive at ' one' system of science that could take the place of the 'real world' so to speak; everything remains ambiguous and in many ways uncertain." In 1935 Otto Neurath penned these comments in his paper, 'Unity of science as a task'. A passage introducing the paper remarks that scientific people aim at a common procedure of inquiry by which to better understand the ambiguities and uncertainties of our world. But he asks, "Is this uniformity the logical consequence of our program? It is not; I stress again and again; I see it as a historical fact in a sociological sense." For Neurath unity of science was indeed a task, a goal. Unity is not the sort of thing that comes from discovering some small range of fundamental underlying principles of the world, from which science proceeds. But unity in procedure of inquiry is crucial for understanding one another – and for making sense of the very uncertain world in which we live. 

It is this conception of unity that resonates with the view of science we defend in this paper. We address Sheldon Smith's (2001) claims that Cartwright's (1999) The Dappled World overlooks a unity within classical physics. Attending to his criticisms, we defend one account of Cartwright's dappled view that intends to make sense of the scope and character of scientific inquiry, while doing justice to the ambiguities and uncertainties that science necessarily leaves untreated.


Bosbach and Riečan States on Pseudo-Residuated Lattices

Lavinia Ciungu 
(Department of Mathematics, Politehnica University of Bucharest, Romania)

States were first introduced for the commutative structures of MV-algebras by D. Mundici in [2] and for BL-algebras by B. Riečan in [3]. In the case of noncommutative fuzzy structures, they were introduced by A. Dvurečenskij in [4] for pseudo-MV algebras, by G. Georgescu in [5] for pseudo-BL algebras, and by A. Dvurečenskij and J. Rachůnek in [6] for bounded noncommutative Rl-monoids.

In the case of pseudo-MV algebras, A. Dvurečenskij proved in [7] that any pseudo-MV algebra M is isomorphic to Γ(G, u) = {g ∈ G : 0 ≤ g ≤ u}, where (G, u) is an l-group with strong unit u. This allowed him to define a partial addition +, where x + y is defined if and only if x ≤ y⁻ = u − y.
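Written out schematically, and following (up to conventions) the interval construction of [1] and [7], the induced operations read as follows (a sketch only; the forms of the two negations x⁻ and x∼ given here are assumptions of this gloss, not quoted from the paper):

\[
  \Gamma(G,u) = \{\, g \in G : 0 \le g \le u \,\}, \qquad
  x \oplus y = (x + y) \wedge u, \qquad
  x^{-} = u - x, \qquad x^{\sim} = -x + u,
\]

with the partial sum x + y (the group addition) defined exactly when x ≤ y⁻.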
The other noncommutative structures do not have such a group representation, and it was more difficult to define the notion of state for them. In the case of pseudo-BL algebras, G. Georgescu proved that any Bosbach state is also a Riečan state, and he left as an open problem to find an example of a Riečan state on a good pseudo-BL algebra which is not a Bosbach state. In the case of good bounded noncommutative Rl-monoids, Dvurečenskij gave an answer to this problem, proving that any Riečan state is at the same time a Bosbach state.

Inspired by the above-mentioned results, in this paper we extend the notion of state to pseudo-residuated lattices. The final results prove that in this case any Bosbach state is also a Riečan state, but the converse is not true, which answers Georgescu’s open problem.
We first give the definition of a pseudo-residuated lattice and prove the basic properties of this structure, which are also valid for other structures such as pseudo-BL algebras, weak pseudo-BL algebras and bounded noncommutative Rl-monoids. The distance functions are then defined; they are very important for proving some of the main results. Next, we define Bosbach and Riečan states on pseudo-residuated lattices and investigate their basic properties, proving that any Bosbach state is a Riečan state. As an answer to Georgescu’s open problem, we give an example of a Riečan state on a good pseudo-residuated lattice which is not a Bosbach state.
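For orientation, in the pseudo-BL and Rl-monoid settings of [5] and [6] the two notions take roughly the following form (a sketch only, with → and ⇝ denoting the two implications; the precise conditions for pseudo-residuated lattices are those given in the paper):

\[
  \text{Bosbach state:}\quad s(0) = 0, \quad s(1) = 1, \quad
  s(x) + s(x \rightarrow y) = s(y) + s(y \rightarrow x), \quad
  s(x) + s(x \rightsquigarrow y) = s(y) + s(y \rightsquigarrow x);
\]
\[
  \text{Riečan state:}\quad s(1) = 1 \ \text{ and } \
  s(x + y) = s(x) + s(y) \ \text{ whenever the partial sum } x + y \text{ is defined.}
\]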

References

[1] A. Dvurečenskij, S. Pulmannová, “New Trends in Quantum Structures”, Kluwer Acad. Publ., Dordrecht, Ister Science, Bratislava, 2000.
[2] D. Mundici, “Averaging the truth-value in Łukasiewicz sentential logic”, Studia Logica 55 (1995), 113-127.
[3] B. Riečan, “On the probability on BL-algebras”, Acta Math. Nitra 4 (2000), 3-13.
[4] A. Dvurečenskij, “States on pseudo-MV algebras”, Studia Logica 68 (2001), 301-327.
[5] G. Georgescu, “Bosbach states on fuzzy structures”, Soft Computing 8 (2004), 217-230.
[6] A. Dvurečenskij, J. Rachůnek, “On Riečan and Bosbach States for Bounded Noncommutative Rl-Monoids”, Preprint Series, Mathematical Institute, Slovak Academy of Sciences, 2005.
[7] A. Dvurečenskij, “Pseudo-MV algebras are intervals in l-groups”, J. Australian Math. Soc. 70 (2002), 427-445.


A Unified Account of Irrationality

Vasco Correia
(Université Paris IV, France)

According to the formal accounts of rationality that prevail in rational choice theories, we should limit ourselves to considering as irrational those cases in which an agent acts or thinks counter to his own conception of what is reasonable, in other words, cases in which the agent transgresses his own norms of rationality. The formal criterion of irrationality would then be an inner inconsistency in the agent’s mind, arising either between his actions and his decisions, at the practical level, or between two of his beliefs, at the cognitive level. The interest of such a formal account is that it does not prejudge the agent’s moral principles or values, allowing for an approach to rationality independent of any moral assumptions. 
Under these conditions, self-deception and akrasia (or “weakness of will”) appear to be the paradigmatic forms of irrationality. Arguably, the reason for this is that in both cases the agent seems to “conspire” against himself. The akratic agent is the one who deliberately acts contrary to what he judges to be in his best interest, and the self-deceiver deliberately believes contrary to what he suspects to be the truth. In appearance, at least, this undermines the most common assumptions in theories of rationality: namely, that we tend to act upon what we judge best, and that we tend to believe what seems to us to be true. In an attempt to understand these paradoxes, many authors have proposed radical hypotheses, such as the partition of the mind, the existence of mental tropisms, and of course the existence of an unconscious. 
My first claim is that irrationality (both cognitive and practical) can be understood without assuming that the mind is divided. The key to this view is the suggestion that the cause of irrationality is neither a conscious intention of acting contrary to what the agent himself considers to be reasonable (Sartre, Davidson, Pears), nor the direct effect of an unconscious desire (Freud, Audi), but rather the influence of our emotions (desires, fears, hopes, etc.) on the process of belief formation. This claim is supported by the vast empirical evidence provided by social psychology in recent decades, which clearly shows that emotions distort our beliefs in a variety of ways, namely by affecting the cognitive processes involved in the formation of belief (biased attention to evidence, biased interpretation of available evidence, selective memory, selective evidence gathering, etc.). Thus, although we seem unable to believe something “at will”, simply because we want to, it may sometimes happen that our motives surreptitiously lead us to believe what we wish were the case (wishful thinking), even if this means accepting a certain proposition in the teeth of the evidence, which is precisely what self-deception is about.
But could such a cognitive model of irrationality be extended to the practical sphere and successfully explain weak-willed actions? My second claim is precisely that the influence our motives may exert over our judgments also accounts for genuine cases of practical irrationality. As some authors have recently pointed out (Elster, Ainslie, Bird), and as Socrates seems to suggest in Plato’s Protagoras, weakness of will can be understood as a sort of «myopia» that affects human beings where time preferences are concerned. Loosely speaking, the idea is simply that, other things being equal, we tend to prefer immediate small rewards to greater future rewards. For example, if one has to choose between a small early reward (e.g. eating dessert after a meal) and a greater future reward (e.g. losing some weight), the early reward might appear to be bigger than the delayed one and, in consequence, one ends up eating the dessert. Thus, the akratic action does not appear to stem from a so-called “weakness of the will”, but simply from an evaluation illusion, which leads the agent to miscalculate the expected value of each option and, consequently, to choose an option that does not maximize his interest. This argument will then bring us to a more general conclusion, namely that practical irrationality stems from cognitive irrationality, and that both result from the several ways in which our desires may bias our judgments. 
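One standard way to model this temporal «myopia» is Ainslie-style hyperbolic discounting. The following sketch is purely illustrative (the discount function, rewards and delays are our assumptions, not the author's formalism); it shows how the preference between a smaller-sooner and a larger-later reward can reverse as both draw near.

# Illustrative sketch: Ainslie-style hyperbolic discounting, under which the
# preference between a smaller-sooner and a larger-later reward can reverse.

def hyperbolic_value(amount, delay, k=1.0):
    """Present value of a reward received after `delay` time units."""
    return amount / (1.0 + k * delay)

small_soon = 10.0   # e.g. the dessert, valued at the moment of receipt
large_late = 15.0   # e.g. the health benefit, valued at the moment of receipt

for delay_to_small in (20.0, 0.5):            # far away, then close at hand
    delay_to_large = delay_to_small + 5.0     # the big reward comes 5 units later
    v_small = hyperbolic_value(small_soon, delay_to_small)
    v_large = hyperbolic_value(large_late, delay_to_large)
    choice = "small-soon" if v_small > v_large else "large-late"
    print(f"delay to small reward = {delay_to_small:>4}: "
          f"v(small) = {v_small:.2f}, v(large) = {v_large:.2f} -> prefers {choice}")

# From afar (delay 20) the larger-later reward dominates (0.48 < 0.58); up close
# (delay 0.5) the smaller-sooner reward looks bigger (6.67 > 2.31): the
# 'evaluation illusion' behind akratic choice.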


Physics and Computation: Essay on the unity of science through computation

Felix Costa
(Department of Mathematics, I.S.T. (Instituto Superior Técnico), Lisbon Technical University, Portugal, and Center for Mathematics and Fundamental Applications, Lisbon University, Portugal)

 

S. Barry Cooper and Piergiorgio Odifreddi have written the most interesting articles of our times on the philosophy of computing. In this paper I will try to reconcile their views with established science and scientific criticism. I will take as my main reference a work containing pointers to many of the ideas set out in previous work by the same authors. 

Computability Theory has been considered a corpse by mathematicians who have forgotten the old debate about whether computability theory has useful consequences for mathematics other than those whose statements depend on recursion-theoretic terminology. In this context, hypercomputation is a forbidden word because it is not implementable, as foundational criticism has it, although mathematicians do not mind exploring Turing degrees such as 

                                                K, K', K'', K''', …

We explore the origins of this criticism and of the misinterpretations of concepts such as super-Turing computational power. 

To make the discussion open to all generations of mathematicians and physicists, we develop our argumentation in the language of the late sixties and early seventies, the decade of decline of the most enthusiastic debates in science: in mathematics, in physics, in cosmology, and so on; the times of the radio programmes of Sir Fred Hoyle, of the middle-of-the-night phone calls between Sir Roger Penrose and Stephen Hawking, and of the solution of Hilbert's Tenth Problem by Martin Davis, Hilary Putnam, Julia Robinson, and Yuri Matiyasevich.

 


The Principle of Eurhythmy. A Key to the Unity of Physics

José Croca 
(Faculty of Sciences, Lisbon University, Portugal)

The aim of physics has always been unity. This ideal means that physicists look for a very basic principle from which it would, at least in principle, be possible to derive the laws describing physical reality at different scales of observation. At present we are faced in physics with two independent, even opposite, theories valid at different scales: classical physics and quantum physics. Classical physics holds at the macroscopic scale, while quantum physics is valid at the atomic and molecular level. The unification of the different physical laws, from classical and quantum physics to gravitic physics, now seems possible. The key to this unity is the principle of eurhythmy. The word comes from the Greek, meaning, literally, the principle of the adequate path. It will be shown that Heron's principle of minimum path and Fermat's principle of minimum time, basic to classical physics, and de Broglie's guiding principle, a cornerstone of nonlinear causal quantum physics, are no more than particular cases of the principle of eurhythmy. Furthermore, it will be seen, with concrete examples from classical physics, quantum physics and gravitic physics, that all these different branches of physics can be unified and understood in a causal way as particular cases of this general principle. This principle allows us to see the diverse aspects of physical reality at different scales of observation as a single coherent, causal and beautiful whole.
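For reference, the two classical principles mentioned above are standardly written in variational form as follows (textbook statements, not the author's own formulation of eurhythmy); here ds is the line element along the path, v the local speed of light in the medium, c its vacuum speed and n = c/v the refractive index:

\[
  \text{Heron (shortest path):}\quad \delta \int_{A}^{B} ds = 0,
  \qquad
  \text{Fermat (least time):}\quad \delta \int_{A}^{B} \frac{ds}{v}
  \;=\; \frac{1}{c}\,\delta \int_{A}^{B} n \, ds \;=\; 0 .
\]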


Bibliography

J.R. Croca, Towards a Nonlinear Quantum Physics, World Scientific, London, 2003.


Argumentative Pluralism

Cédric Dégremont 
Laurent Keiff
(University of Lille III, France)

Logical pluralism has become standard in professional practice. With the exception of stances with (strong) philosophical motivations, it seems a fairly widely shared intuition that there is no unique acceptable logical theory. One can define the subject matter of logic as “cogent reasoning” (van Benthem [1]), and for more than a century the principal tools for studying it have been mathematical structures. The power of mathematical methods has allowed for the description and theoretical control of countless logical systems, but at the same time has turned fruitfulness into a problem. Which of the infinitely many logics available in the literature are principled and meaningful? How can one give an account of the intuitively sound claim that cogent reasoning cannot be just anything? Our proposal here is to seek in the theory of argumentation, and in its mathematical representation in game theory, the principles of a logical pluralism under control.

Dialogical logic ([2], [3], [4]) provides a conceptual setting for this proposal. Building upon a game-theoretical core, dialogical logic captures the meaning of the elementary logical notions as expressions of the features of a certain type of argumentation game in which two players interact, asserting claims and challenging them. More precisely, the usual dialogical games are two-player, zero-sum, well-founded games in which the first player (the Proponent, P) puts forth a main claim (the thesis of the dialogue), while the other (the Opponent, O) tries to refute it. The truth of a formula is understood in terms of the ability of the player who asserted it to defend it against the allowed criticisms. A formula is said to be dialogically valid if the Proponent has a winning strategy in the dialogue associated with it. The rules of dialogical games belong to two categories: particle rules, which define the local form of the game according to the main connective of the formula at stake, and structural rules, which define the global organisation of the game. The latter determine the properties of the notion of validity: one changes the logic by changing the structural rules.

A crucial feature of dialogical systems is the so-called formal restriction. This structural rule states that P can assert an atom if and only if O has conceded this atom first. The rule has a strong argumentation-theoretic content: it amounts to saying that a claim of P is valid if he is able to defend it using only arguments that are grounded in the assertions (or arguments) of O, in the sense that all the non-logical claims are first made by O. Thus the properties of a dialogical system rely essentially on the way P can use the information conceded by O. For instance, the differences between classical, intuitionistic, (some) relevant and linear logics can all be expressed as constraints on the way P uses the conceded information.
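A minimal worked example (our own toy illustration, using the standard particle rule for implication): let P's thesis be p → p. O attacks the implication by asserting its antecedent p; P defends by asserting the consequent p, which the formal restriction now permits, since O has just conceded the atom p. O has no further legal move, so P has a winning strategy and p → p is dialogically valid. For an atomic thesis q, by contrast, P could never move at all, since O concedes nothing.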

Our point is thus the following. Let us define a game in which P is free to use the conceded information in a variety of ways, but with costs assigned to types of moves according to the information flow they entail. P may then have a set of winning strategies in the game for a given thesis (i.e. strategies leading to a final position where O is to move and no legal move is available to him), and the costs allow us to compute the class of optimal strategies. Such strategies capture what one can see as the best ways to argue for the thesis, given some notion of good argumentation encoded in the costs of the moves. If the cost of a strategy is the cost of its most expensive move, then the cheapest strategy for a thesis indicates the class of logics in which the thesis is valid. A clear and easy implementation would give increasing costs to logics ordered by inclusion of their sets of theorems. If, however, one allows the cost of a strategy to be computed as the sum of the costs of its individual moves, the picture gets much more intricate. At the very least, this possibility opens a whole field of future research on the application of decision-theoretic methods and concepts to the evaluation of argumentative strategies.
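The contrast between the two cost schemes can be made concrete with a small computational sketch (entirely illustrative: the move types, their costs and the two strategies below are our own assumptions, not taken from the paper):

# Illustrative sketch only: hypothetical winning strategies for one thesis, each
# listed as the move types it uses.  Costs per move type are made up, increasing
# with the strength of the structural rules the move presupposes.

MOVE_COSTS = {
    "reuse_concession": 1,   # modest use of O's concessions
    "repeat_attack": 2,      # unrestricted re-use of conceded information
    "classical_shift": 3,    # a move available only under classical structural rules
}

strategies = {
    "s1": ["reuse_concession", "reuse_concession", "reuse_concession", "repeat_attack"],
    "s2": ["reuse_concession", "classical_shift"],
}

def max_cost(moves):
    """Cost = most expensive move used (indicates the weakest logic in which the thesis holds)."""
    return max(MOVE_COSTS[m] for m in moves)

def sum_cost(moves):
    """Cost = total of all move costs (the more intricate, decision-theoretic picture)."""
    return sum(MOVE_COSTS[m] for m in moves)

for name, moves in strategies.items():
    print(f"{name}: max cost = {max_cost(moves)}, summed cost = {sum_cost(moves)}")

# Under the max criterion s1 (cost 2) beats s2 (cost 3): the thesis is defendable
# without classical moves.  Under the summed criterion s2 (cost 4) beats s1
# (cost 5): the two notions of optimal strategy come apart.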

References

[1] van Benthem, J. “Foreword”, in Abduction and Induction, Flach, P. & Kakas, A. (eds.), Kluwer, Dordrecht, 2002.
[2] Lorenz, K. & Lorenzen, P. Dialogische Logik, Wissenschaftliche Buchgesellschaft, Darmstadt, 1978.
[3] Rahman, S. Über Dialoge, protologische Kategorien und andere Seltenheiten, Peter Lang, Frankfurt, 1993.
[4] Rahman, S. & Keiff, L. “On how to be a dialogician”, in Logic, Thought and Action, Vanderveken, D. (ed.), Springer, Dordrecht, 2004.


Integrating Mathematics in Unified Science: Carnapian and Kantian perspectives 

Jacques Dubucs
(IHPST, University of Paris I, France)

Much work has been done since Beth's and Hintikka's analyses to vindicate Kant's philosophy of mathematics in the light of modern logic. It is today widely recognized that most of that work relies on an interpretation that distorts the textual evidence to some extent and largely neglects the discrepancy between the pre-critical and transcendental points of view. The paper proposes a new interpretation, grounded in the notion of semi-necessity, conceived as a modality halfway between mere contingency and logical necessity, and explores the means of clarifying it in the light of the modern notion of an intended model.


More information regarding this Colloquium may be obtained from the website
http://cfcul.fc.ul.pt/coloquioscentro/coll.unidade_cc.htm