Centro de Filosofia das Ciências da Universidade de Lisboa


Lisbon Colloquium for the Philosophy of Science

The Unity of Science: Non-Traditional Approaches

 


Abstracts


Partial Knowledge

Daniel Andler (Sorbonne, Paris, France)


The Articulation of the Sciences: A Phenomenological Approach

Fernando Belo
(Departamento de Filosofia, Universidade de Lisboa, Portugal)

I will attempt a brief presentation of a completed work, due to be published this autumn under the title LE JEU DES SCIENCES AVEC HEIDEGGER ET DERRIDA (The Game of the Sciences with Heidegger and Derrida).

An articulation between the sciences and the philosophy of an epoch has existed at least once: Aristotle's Physics is exactly that. Its core: understanding the movement of living beings, something that escaped Plato and, in general, the philosophies of idealities, up to and including Husserl. Since the inertia of European physics replaced this movement kath'auto, it is through molecular biology that we must take up again the attempt at an articulation between the sciences and the phenomenology of the 20th century.

Once the Kantian parenthesis that separated philosophy from the sciences is closed, its historical fecundity seemingly exhausted, we take a step beyond Kuhn: entering the paradigms of the various sciences, locating there their major discoveries of the 20th century, recovering their philosophical dimension (since they were engendered from and against Aristotle's physics), and articulating them with one another and with phenomenology (Husserl, Heidegger and Derrida).


The laboratory, where experimentation is necessarily fragmentary, creates conditions of determination that do not exist on the scene of so-called reality: each theory, in order to gather its fragments of experience, must take into account that, the scene being structurally random, the rules studied in the laboratory have to answer for this randomness. Play, Derrida wrote, is the unity of chance and necessity. Take the example of a car: its rules are studied in the laboratory so that it can play on a random scene of traffic. Among living beings there is more than this: they are engendered and nourished by one another; that is their scene, the law of the jungle. Their autonomy (the genetic program) and that of human social units (learning, language) are given to them by other living beings that remain in the background. These are mechanisms of autonomy with effaced heteronomy.


All these disciplines change by being brought into composition with the others, yet each is respected in its autonomy. Husserl's design of articulating the European sciences is thus accomplished, but quite otherwise than he had envisaged, since they become part of the phenomenology at which we arrive: this is philosophy with sciences. Physics loses its (Kantian) place as a model, the inertia of heavy bodies turning out to be a diminished 'autonomy' conferred by the respective force fields.


Networking Disciplines: a New Approach to the Unity of Knowledge?

João Caraça
(ISEG / Fundação Gulbenkian, Portugal)


 

Nancy Cartwright
(Department of Philosophy, London School of Economics, UK)


A (Quasi) Minimalist Account of Scientific Explanation

Jose Diez Calzada
(Univ. Rovira i Virgili, Barcelona, Spain)


A New Interpretation of Kant's Philosophy of Mathematics

Jacques Dubucs
(IHPST, Université Paris I, France)

Much work has been done since Beth's and Hintikka's analyses to vindicate Kant's philosophy of mathematics in the light of modern logic. It is today widely recognized that most of that work relies on an interpretation that distorts the textual evidence to some extent and that largely neglects the discrepancy between the pre-critical and transcendental points of view. The paper proposes a new interpretation, grounded on the notion of semi-necessity, conceived as a modality halfway between mere contingency and logical necessity, and it explores the means of clarifying it in the light of the modern notion of intended model.


Variety of Approaches to Philosophy of Science

Anne Fagot-Largeault
(Collège de France, Paris, France)


The Changing Ethos of Science

José Luís Garcia
(ICS, Lisboa, Portugal)


Vincent Hendricks
(University of Copenhagen, Denmark)


Computational Templates as Cross-Disciplinary Bridges

Paul Humphreys
(Philosophy, University of Virginia, USA)


Evolutionary Psychology and the Unity of Science

Luís Moniz Pereira
(Centre for Artificial Intelligence /CENTRIA, Universidade Nova de Lisboa, Portugal)


Unity of Science. Programs, Configurations and Metaphors

Olga Pombo
(Faculdade de Ciências, Universidade de Lisboa, Portugal)

Unity of Science is both a regulative idea and a task. That is why it has been grasped through the most extreme metaphors of an invisible totality and has given rise to several epistemological programs and intellectual movements. However, before moving up to such exemplary issues, I will pay attention to the deep institutional configurations of Unity of Science (Library, Museum, "République des Savants", School and Encyclopaedia) and to their polyhedric articulations. More than a game of complementarities, what seems interesting is to show that their structured relationship is endowed with an important descriptive and normative capacity.


Language Games in Logic and the Role of a Logical Language in the Encyclopedic Project of Otto Neurath

Shahid Rahman
(Department of Philosophy, Université Lille, France)

In the preface to the volume launching the series "Logic, Epistemology and the Unity of Science", Rahman and Symons suggested that Neurath's proposal of the construction of a logical language has to be connected to the role of the philosopher, inherited from the tradition of the French Encyclopedists, as a builder of cooperation bridges between the different sciences. In our view, however, Neurath would add to the Encyclopedist project that the role of the philosopher should be performed with the help of a logical analysis of the language of the sciences. In Neurath's papers of 1935 and 1938 the construction of a logical language is thought of as an instrument of unification, a kind of lingua franca.

I will argue that language games are an adequate instrument to perform such a role in a non-reductionist way.

In this context I will discuss two frameworks which claim to implement the notion of language game in logic: the framework of the indoor games of the dialogical tradition initiated by Paul Lorenzen and Kuno Lorenz, and the framework of the outdoor games of the game-theoretical semantics tradition initiated by Jaakko Hintikka. Furthermore, I will try to show how the dispute between indoor and outdoor games could be reconciled in a logical framework able to differentiate between public and private knowledge of both actions and objects, and where the role of language games as meaning mediators carries a commitment to humanly playable games. Here I will also suggest that the notion of humanly playable must be connected with Wittgenstein's view of language games as having a "fictional" character.

Finally, I will discuss some consequences of and open problems for this approach, such as Wittgenstein's thesis of the "ineffability" of semantics.


The Vienna Circle. Context, Profile, and Development

Friedrich Stadler
(Universität Wien / Institut Wiener Kreis, Austria)


What Could Be the Unity of Science Today, and Do We Need It?

Jan Sebestik
(Paris, France)


Sheldon Steed
(Department of Philosophy, London School of Economics, UK)


Emergence and Scientific Explanation

John Symons
(University of Texas at El Paso, USA)


The Unity of Science and the Arabic Tradition
Progress and Controversy
The Case of Ibn al-Haytham's al-Shukūk against Ptolemy

Hassan Tahiri
(Department of Philosophy, Université Lille, France)

The so-called Copernican revolution is Kuhn's most cherished example in his conception of the non-cumulative development of science. Indeed, on his view the Copernican model not only introduced a major discontinuity in the history of science, but the new paradigm and the old paradigm are incommensurable, i.e. the gap between the two models is so huge that the changes introduced in the new model cannot be understood in terms of the concepts of the old one. The aim of the paper is to show, on the contrary, that the study of the Arabic tradition can bridge the gap assumed by Kuhn as a historical fact precisely in the case of Copernicus. The changes involved in the work of Copernicus arose, on our view, as a result of an interweaving of epistemological and mathematical controversies in the Arabic tradition which challenged the Ptolemaic model. Our main case study is the work of Ibn al-Haytham, who devotes a whole book to the task of refuting the implications of the Almagest machinery. Ibn al-Haytham's al-Shukūk had such an impact that since its divulgation the Almagest ceased to be seen as the suitable model of the heavens. Numerous attempts were made to find new alternative models based on the correct principles of physics, following the strong appeal of both Ibn al-Haytham and, after him, Ibn Rushd. The work of al-Shātir, based exclusively on the concept of uniform circular motion, represents the climax of the intense theoretical research undertaken during the thirteenth and fourteenth centuries by the so-called Marāgha school. Furthermore, not only has the identity of the models of al-Shātir and Copernicus been established by recent research, but it was also found that Copernicus used the very same mathematical apparatus which the Marāgha school had developed over at least two centuries. Striking is the fact that Copernicus uses, without proving them, some mathematical results already proven (geometrically) by the Marāgha school three centuries before. Our paper will show that Copernicus was in fact working under the influence of two streams of the Arabic tradition: the well-known, more philosophical western stream, known as physical realism, and the newly discovered eastern mathematical stream. The first relates to the idea that astronomy must be based on physics, and that physics is about the real nature of things. The second relates to the use of mathematics in the construction of models and countermodels in astronomy as developed by the Marāgha school. The case presented challenges the role assigned to the Arabic tradition by the standard interpretation of the history of science and, more generally, it presents a first step towards a reconsideration of the thesis of discontinuity in the history of science. Our view is that major changes in the development of science may sometimes be non-cumulative, though this is not a case against continuity, understood as the result of a constant interweaving of a net of controversies inside and beyond the science at stake. The paper presents a further exploration of Rahman's concept of unity in diversity.
 


The Achilles' Heel of the Unity of Science Program

Juan Manuel Torres
(CFCUL / Universidad de Bahía Blanca, Argentina)


The Cultural Sciences and their Basis in Life. On Ernst Cassirer's Theory of Cultural Sciences

Christian Moeckel
(Humboldt-Universitaet, Berlin, Germany)


Dealing with Uncertainty in Modern Science

Dinis Pestana 
(Faculdade de Ciências, Universidade de Lisboa, Portugal)

For a while, scientists tried to set aside Heraclitus' legacy, that contingency and ephemerality are at the core of reality. But quantification has been Science's way, and metrological issues have brought to the forefront errors in measurement and the protagonism of uncertainty; by 1920 Pólya had christened the asymptotic results on the "normal" approximation the central limit theorem, recognizing in it the ultimate weapon for measuring with the accuracy we need, insofar as we can pay for a long run of measurements. This is but one well-known instance where composing with uncertainty pays much better dividends than trying to avoid it.
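
A minimal simulation sketch of that point (my illustration, not from the abstract): averaging n repeated noisy measurements shrinks the spread of the estimate roughly as 1/sqrt(n), which is the practical content of the central limit theorem.

import random
import statistics

def measure(true_value=10.0, noise=1.0):
    """One noisy measurement: the true value plus a zero-mean Gaussian error."""
    return random.gauss(true_value, noise)

def averaged_estimate(n):
    """Estimate the true value by averaging n repeated measurements."""
    return statistics.mean(measure() for _ in range(n))

random.seed(0)
for n in (1, 10, 100, 1000):
    # Empirical spread of the averaged estimate over 500 repetitions:
    estimates = [averaged_estimate(n) for _ in range(500)]
    print(n, round(statistics.stdev(estimates), 3))   # shrinks roughly as 1/sqrt(n)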

Since their appearance as branches of Mathematics, Probability and Statistics have been part of the toolbox used by scientists in all fields to cut through the deep complexity of data, accompanying and sometimes preceding the advancement of science. The total probability theorem is, in fact, Descartes' method of dealing with a complex problem, splitting it into simpler sub-problems so as to work out a solution as the blending of the partial solutions of these sub-problems; and three centuries later, Fisher's analysis of variance is a brilliant and path-breaking example that Descartes' method can be inappropriate, that some problems must be solved as a whole and cannot be split into sub-problems. Fisher's work also showed that science had to move from the observational gathering of data to the production of relevant data, through the new discipline he invented, Design of Experiments, since this way of dealing with information is much more rewarding in knowledge building.
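
For reference, the decomposition alluded to above (a standard statement, added here for the reader): for mutually exclusive and exhaustive cases B_1, ..., B_n, the total probability theorem reads

P(A) = P(A|B_1)P(B_1) + P(A|B_2)P(B_2) + ... + P(A|B_n)P(B_n),

which is exactly the Cartesian move of solving simpler conditional sub-problems and blending the partial solutions.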

In fact, information is important insofar as it is at the root of knowledge building, and Probability and Statistics have a large share in the toolbox of methodologies that allow us to extract knowledge from the available data. In a sense, Probability and Statistics are our resources for taming uncertainty, sometimes using its patterns as a source of knowledge in itself. Unfortunately, the formal training of scientists relies much more on the ability to deal with ad hoc techniques than on a deep understanding of the principles involved in statistical thinking. There is far too much "random search" for significant results, with too little critical appraisal of the information needed to reach conclusions, too little consideration of confounding concerns, and poor understanding of the essential role of repeatability in the experimental method.

We discuss the importance of planning experiments, issues of appropriate sample sizes, and concerns arising in meta-analytical methods, as well as the limits of statistical tools in the construction of science from empirical evidence.


Alexandre Quintanilha
(Instituto de Biologia Molecular e Celular da Universidade do Porto, Portugal)


Scientific Reasonableness and the Pragmatic Approach to the Unity of Science

Andrés Rivadulla 
(Department of Logic and Philosophy of Science, Universidad Complutense, Spain)

The question of the unity of science is one of the most serious issues modern philosophy of science has been concerned with from its very beginning. The idea of Unified Science was so important for the successors of the Viennese neo-positivists that an International Encyclopedia of Unified Science replaced the journal Erkenntnis in the USA and survived until the early sixties of the past century. Rudolf Carnap's phenomenalism in Der logische Aufbau der Welt (1928) and the Vienna Circle's physicalism in the thirties were faced with the philosophical justification and explanation of the idea of the unity of science, which constituted an essential part of the programme defended in Carnap, Hahn and Neurath's document Wissenschaftliche Weltauffassung. Der Wiener Kreis (1929).

As is well known, the neo-positivist concern with the unity of science was intended to undermine Wilhelm Dilthey's distinction between the natural and the social-cultural sciences, the so-called Geisteswissenschaften. Carnap's Aufbau in particular was a philosophical monument erected from a logical-positivist perspective in favour of the unity of science.

Verstehen vs. Erklären (understanding vs. explanation) was the methodological contraposition between the Geisteswissenschaften on the one side and the natural sciences on the other. This contraposition was based on the impossibility for the Geisteswissenschaften of showing an empirical success comparable to the impressive success the natural sciences had allegedly been able to show since the outset of the Scientific Revolution. And, in spite of the intended purpose of Diderot and D'Alembert's Encyclopédie (1751) to defend the Unity of Culture, a large tradition supporting this contraposition arose, starting with Johann Gustav Droysen's Grundriss der Historik and continuing with Max Weber and Wilhelm Dilthey. The discussion was continued in the second half of the 20th century by Georg Henrik von Wright, Hans-Georg Gadamer and Jürgen Habermas, among others.

In my contribution I will address this question in three steps. Firstly, I will proceed historically. I will point to the fact that Carnap's approach to the Unity of Science (a view according to which it was legitimate to give up the contraposition between Naturwissenschaften, Psychologie and Geisteswissenschaften) was grounded on the philosophical mistake of assuming that it was possible to provide a sound foundation for the whole of science on a unique firm basis. Moreover, the subsequent Vienna Circle physicalist attempt to save the situation was reasonably rejected by contemporary philosophers of science like Popper and Fleck and, some years later, by the stream of methodologists who doubted the existence of a neutral empirical basis, and in general by the post-positivist epistemologists.

My second argument will be a more philosophical one: I will deal with the question of whether or not the natural sciences harbour some kind of privileged and exclusive method of access to reality. Moreover, I will treat the question of whether the Unity of Science can be rescued by mimetically applying the method of the natural sciences to social sciences like economics, empirical sociology and others. In order to answer these questions I will scrutinize the alleged overwhelming success of classical science. In particular I will point to the so-called threefold breaking-off of determinism. Furthermore, I will also argue that, as contemporary physics shows, every form of scientific creativity, call it induction, abduction or preduction, only provides means that allow us to deal fallibly with Nature.

Finally, if I am right, and the ideal of secure science reveals itself to be merely a myth of rationalism in our scientific culture, I propose to replace, in the realm of science, the search for rationality with the search for reasonableness. Reasonableness is a weaker demand than rationality; it is tied neither to the idea of truth as the aim of science nor to the existence of a secure and unique scientific method. But it is a guarantee that the justification of our conjectures, decisions and, if possible, even our fallible beliefs is grounded in critical discussion and argumentation. This makes superfluous any sharp distinction between the natural and the social-cultural sciences, but also any superimposed assimilation of the Geisteswissenschaften to the natural sciences, whether by way of the foundation of the different sciences on a common ground, by reduction to a fictitious physicalist language, or by the assumption of a 'superior' method of the natural sciences.

To sum up, I conceive of the question of the unity of science as a particular case of the question of the unity of Western culture, approached from a pragmatic viewpoint in contemporary philosophy.


Tero Tulenheimo
(Department of Philosophy, University of Helsinki, Finland)


No More Mirror Stage: Resonance and Coupling as Parameters of Unity of Science

Stephanie Wenner
(Berlin)


Naturalism and the Unity of Science

Jan Wolenski
(Uniwersytet Jagielloński, Poland)

The program of the unity of science (PUS) has been based on various foundations. The philosophers of the Vienna Circle proposed physicalism, consisting in expressing science in a unique linguistic framework, namely the physicalistic language. Another possible solution appeals to the methodological unity of all sciences as employing the same method, in particular the same standards of rationality. However, any PUS program has to meet serious challenges. Is mathematics "physicalisable"? Are the humanities and social sciences consistent with physicalism? Which method is the method of science: induction, deduction, the hypothetico-deductive method, etc.? Are values and norms treatable by scientific procedures conceived from the point of view of PUS? Is the operation of Verstehen compatible with PUS?

Most versions of PUS are modelled on the advanced natural sciences. This sometimes suggests that the other sciences are still underdeveloped, for example that their mathematization has not yet reached a sufficient degree. Thus, the idea of an unequal progress of science is a by-product of PUS. On the other hand, such a program is attractive in itself and will always be pursued. My claim is that the linguistic and methodological versions of PUS are too limited and do not do justice to the peculiarities of particular fields. Naturalism, on the other hand, seems to be a view which respects the unique character of all phenomena subjected to scientific research. This view can be characterized by three principles (due to Hume): (1) there exist natural things and their complexes; (2) only natural epistemic capacities are admitted in science; (3) we should trust natural epistemic capacities. Thus, the unity of science, from the point of view of naturalism, has its foundations in the unique ontological character of the world.

There are many problems to be discussed from the naturalistic point of view. Two seem to me of particular interest, namely the status of logic and mathematics, and the relation of is and ought. It seems that the abstract concepts and structures investigated by the formal sciences arose as natural devices securing information against dispersion. The naturalistic solution of the is/ought problem appeals to the fact that the non-derivability of modal statements from non-modal ones is a common phenomenon.


Searching for the Unity of Science: from Classical Logic to Abductive Logical Systems

A. Aliseda   
Angel Nepomuceno
Fernando Soler 
(Departamento de Filosofía y Lógica y Filosofía de la Ciencia, Universidad de Sevilla, Spain)

 

Abduction has been considered the scientific inference par excellence, and it can be used in different disciplines: linguistics, mathematics, artificial intelligence, etc. Deduction has been fully studied by classical logic and its extensions, and important calculi have been developed, but abduction is not just deduction in reverse. The strength of classical logic may be a sure starting point for obtaining abductive logical systems; in fact there are several logical models of abduction (Aliseda, 1997; Hintikka, 1997; Kakas et al., 1998; etc.), though some questions remain without a clear answer. One of them is to determine conditions for defining abductive calculi.

Given a formal language L, a deductive logic is an entailment relation defined in such a way that the validity of a formal argument means that the argument belongs to the relation. It is usual to write A |-- a to represent that A entails a (A a set of sentences of L, a a sentence); A |-/- a represents that A does not entail a. The structural rules of |-- are reflexivity, monotony and transitivity, so that |-- is a closure relation. Another important property is compactness: |-- is compact whenever, for every set of formulae A and formula a, if A |-- a then there is a finite subset A' of A such that A' |-- a.
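
Written out explicitly (these are the standard closure conditions, added here for the reader; nothing beyond the abstract's own notation is assumed):

Reflexivity: if a belongs to A, then A |-- a.
Monotony: if A |-- a and A is a subset of B, then B |-- a.
Transitivity (cut): if A |-- b for every b in B, and B |-- a, then A |-- a.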

If the connectives are ¬, &, v and → (we omit references to quantifiers for brevity), the operational rules, for any set of sentences A and sentences x, y, z, are:

-Negation:

1. A |-- x iff A |-- ¬¬x

2. If A, x |-- y and A, x |-- ¬y, then A |-- ¬x

-Disjunction:

1. If A, y |-- x and A, z |-- x, then A, y v z |-- x

2. If A |-- x, then A |-- x v y

-Conjunction:

1. If A |-- x & y, then A |-- x

2. If A |-- x and A |-- y, then A |-- x & y

-Implication:

1. A, x |-- y iff A |-- x → y

New consequence relations can then be introduced: pivotal-assumption, pivotal-valuation and pivotal-rule consequence relations. All of them are closure relations, so they verify reflexivity, monotony and transitivity, and they are supraclassical, though not all of them are compact. A calculus is a relation too. We shall say that a calculus |--* is suitable for a supraclassical consequence relation |-- iff for every set of sentences A and every sentence x, if A |--* x then A |-- x.

(A, y) is an abductive problem iff A |-/- y. Given a supraclassical consequence relation |--, the set of all plain abductive solutions of an abductive problem (A, y) is defined as

Ab(A, y) = {x | A, x |-- y}.

Since we should be interested in consistency,

Abcon(A, y) = {x | A, x |-- y and for all z, A, x |-/- z & ¬z}.

Finally, the explicative one is

Abex(A, y) = {x | A, x |-- y and x |-/- y}.

An explanatory relation ==> is defined as follows: (A, y) ==> x iff x belongs to Ab(A, y) (or to Abcon(A, y), Abex(A, y)). An important result can be proved: if there is a suitable calculus |--* for a classical consequence relation |--, then an abductive calculus suitable for an explanatory relation is definable. The proof is based on the following fact: x belongs to Ab(A, y) iff A, ¬y |--* ¬x (obtained from contraposition, a derived rule).
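
A toy illustration of these definitions (my sketch, not the authors' system): over a finite propositional language, classical entailment can be decided by truth tables, Ab(A, y) can be enumerated over a finite set of candidate formulas, and the contraposition property used in the proof can be checked directly.

from itertools import product

# Formulas as nested tuples: ("atom","p"), ("not",f), ("and",f,g), ("or",f,g), ("imp",f,g).
def ev(f, v):
    op = f[0]
    if op == "atom": return v[f[1]]
    if op == "not":  return not ev(f[1], v)
    if op == "and":  return ev(f[1], v) and ev(f[2], v)
    if op == "or":   return ev(f[1], v) or ev(f[2], v)
    if op == "imp":  return (not ev(f[1], v)) or ev(f[2], v)

def atoms(f, acc=None):
    acc = set() if acc is None else acc
    if f[0] == "atom":
        acc.add(f[1])
    else:
        for sub in f[1:]:
            atoms(sub, acc)
    return acc

def entails(A, y):
    """A |-- y: every valuation satisfying all of A satisfies y."""
    props = sorted(set().union(*(atoms(f) for f in list(A) + [y])))
    for bits in product([False, True], repeat=len(props)):
        v = dict(zip(props, bits))
        if all(ev(f, v) for f in A) and not ev(y, v):
            return False
    return True

p, q, r = ("atom","p"), ("atom","q"), ("atom","r")
A = [("imp", p, q)]        # theory: p -> q
y = q                      # observation: q; A |-/- q, so (A, y) is an abductive problem
candidates = [p, q, r, ("not", p)]
Ab = [x for x in candidates if entails(A + [x], y)]
print(Ab)                  # p and the trivial solution q qualify; r and not-p do not
# Contraposition, as in the proof above: x belongs to Ab(A, y) iff A, not-y |-- not-x
assert all(entails(A + [("not", y)], ("not", x)) == (x in Ab) for x in candidates)

Note that the trivial solution q would be filtered out by the explicative variant Abex, since q |-- q.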

Despite some problems (the decision problem, complexity, etc.), this bridge between classical logic and abduction shows a real possibility of unifying scientific methodology. Bibliographical references will be given.


Direct abduction through dual resolution

A. Aliseda   
Angel Nepomuceno
Fernando Soler 
(Departamento de Filosofía y Lógica y Filosofía de la Ciencia, Universidad de Sevilla, Spain)



In recent years a debate has arisen in the Philosophy of Science over the characterisation of abduction as the logic of discovery (Paavola, 2004). There are many positions in defence of abduction (Hintikka, 1998), but all of them impose essential requirements for considering abduction as the logic of discovery.

On the other hand, parallel to this discussion, many logical and computational approaches to abduction have been developed (Kakas et al., 1998). Most of them use standard deductive calculi to attack abductive problems (Mayer and Pirri, 1993). Informally, if T is a set of formulae (which plays the role of a theory) and F a formula (the surprising observed fact), then the pair (T, F) constitutes an abductive problem whenever F does not follow (is not a logical consequence) from T. Then E (a formula which represents the explanation) is a solution for (T, F) if T+E entails F. It is usual to add further constraints on the abductive solutions. But in classical logic "T+E entails F" is equivalent to "T+(not F) entails (not E)". When using a deductive calculus for abduction, given the abductive problem (T, F), one obtains conclusions from T+(not F), and then each of their negations is a possible explanation.

But proceeding in this way is somewhat similar to reductio ad absurdum, because the abductive search starts precisely from the background theory together with the negation of the observed fact. We think this is one of the causes which most contribute to moving logical abduction further away from the great expectations coming from Philosophy of Science and Epistemology. It is hardly believable that to explain F one should start precisely from the negation of F. Actually, F is the only empirical datum in the abductive problem. Though every abductive solution E obtained in this way is a correct one, it is not possible to take these procedures as a logical model either of scientific discovery or of common-sense reasoning.

We present a logical calculus, delta-resolution (Soler et al., 2006), proposed to attack abductive problems in a more direct way than the common approaches. It is dual to classical resolution, and its starting point is the classical equivalence of "T+E entails F" with "E entails (T implies F)", so one does not have to deny the empirical data but rather to look for hypotheses E which fill the gap in T in order to explain F. The definitions and properties of delta-resolution are dual to those of the standard resolution calculus. A procedure to produce consistent explanatory abduction by delta-resolution is proposed, following these steps: theory analysis, observation analysis, refutation search and, finally, explanation search. We conclude by studying the connections between these methods and some ideas coming from the Philosophy of Science.
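
A rough sketch of the dual idea (my own, under the assumption that cubes, i.e. conjunctions of literals, play the role that clauses play in resolution; the paper's actual delta-resolution calculus is richer): iterating the dual rule on a DNF of "T implies F" derives new cubes, each of which entails T -> F, so each consistent derived cube is a candidate explanation E, and the observation F is never negated.

from itertools import combinations

def neg(lit):                      # literals are strings: "p" or "-p"
    return lit[1:] if lit.startswith("-") else "-" + lit

def consistent(cube):
    return not any(neg(l) in cube for l in cube)

def dual_resolution_closure(cubes):
    """Saturate a DNF (set of cubes) under the dual rule:
    from c1 + {p} and c2 + {-p} derive c1 + c2 (if consistent).
    Every derived cube entails the disjunction of the original cubes."""
    cubes = {frozenset(c) for c in cubes}
    changed = True
    while changed:
        changed = False
        for c1, c2 in combinations(list(cubes), 2):
            for lit in c1:
                if neg(lit) in c2:
                    new = (c1 - {lit}) | (c2 - {neg(lit)})
                    if consistent(new) and new not in cubes:
                        cubes.add(new)
                        changed = True
    return cubes

# Abductive problem: T = {p -> q}, observation F = q.
# T -> F is equivalent to (p & -q) v q; as a set of cubes:
dnf = [{"p", "-q"}, {"q"}]
for cube in sorted(dual_resolution_closure(dnf), key=len):
    print(sorted(cube))            # derives {p}: E = p entails T -> F, so p explains q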

References
Hintikka, J., "What is abduction? The fundamental problem of contemporary epistemology", Transactions of the Charles S. Peirce Society, 34(3): 503-533, 1998.
Kakas, A., Kowalski, R., Toni, F., "The role of abduction in logic programming", in Handbook of Logic in Artificial Intelligence and Logic Programming, pages 235-324, Oxford University Press, 1998.
Mayer, M.C., Pirri, F., "First order abduction via tableau and sequent calculi", Bulletin of the IGPL, 1: 99-117, 1993.
Paavola, S., "Abduction as a logic and methodology of discovery: the importance of strategies", Foundations of Science, 9(3): 267-283, 2004.
Soler, F., Nepomuceno, A., Aliseda, A., "Model-based abduction via dual resolution", Logic Journal of the IGPL, 14(2), 2006.


Unity of Science: The Metaphysical Matrix of an Idea

Mafalda Blanc 
(Departamento de Filosofia, Universidade de Lisboa, Portugal)

L’unité de la science, que les divers courants épistémologiques poursuivent chacun à sa façon a fim de répondre à la croissante compléxité des problèmes et ramification des disciplines, se presente, à nos yeux, plutôt à la manière d’un désidératum – une idée normative et régulatrice par laquelle la raison chercherai à se guider dans ses procédures – que comme un but à atteindre à travers un programe faisable.

Our paper seeks to show that the underlying motivation behind the constant renewal of this ideal is of a metaphysical rather than an epistemological or even merely logical order, rooted in the demand for an integral intelligibility of the real as a whole, a demand that already inspired the Greek project of a protê philosophia.

L’idée de l’unité de la science témoigne ainsi de la profondeur du lien qui toujours rattache le questionnement des sciences à la philosophie, soit dans leur poursuite de coérence interne et articulation interdisciplinaire, soit dans l’ouverture ontologique et métaphysique toujours impliquée dans le sens et l’ampleur de leurs concepts et problématiques.


Paco Calvo
(Department of Philosophy, University of Murcia, Spain)


The Symbiotic Phenomenon in the Evolutionary Context

Francisco Carrapiço  
(Departamento de Biologia Vegetal, Faculdade de Ciências, Universidade de Lisboa, Portugal)

Symbiosis has frequently been considered a biological curiosity rather than a scientific concept, namely within the scope of evolution. Nevertheless, symbiosis is a widespread phenomenon of great biological relevance, which has a fundamental role in the organization and evolution of life. Despite this fact, it has not received proper attention either from the scientific community or in university and high-school curricula. This situation reflects an interpretation of biological evolution based on two classical scientific theories, Darwinism and neo-Darwinism, the only ones that traditionally try to explain this process, and in which symbiosis is neither adequately understood nor handled. For traditional evolutionist authors, symbiosis is nothing more than a residual aspect of the problem of evolution. Recent data, however, point in the opposite direction, showing that symbiosis is a factor of evolutionary change which cannot be accommodated within the framework of neo-Darwinian theory. Questions such as the following should receive a clear, scientific answer and should not be put aside simply because they do not correspond to the mainstream discourse. Why is the symbiotic phenomenon so widespread in nature? Why and how does it perpetuate itself through time? How important is it for the organisms involved? Why do living beings present structural, physiological and/or behavioral changes so well adapted to this relationship? What is the role of symbiosis in the evolutionary process? In this sense, symbiogenesis should be understood as an evolutionary mechanism, and symbiosis as the vehicle through which that mechanism unfolds. This represents a point of view opposed to that of neo-Darwinism, or the Modern Synthesis. According to today's dominant theory, evolution is a gradual process essentially consisting of natural selection acting on minimal phenotypic variations. However, most living forms have symbiotic relationships with microorganisms, and symbiosis therefore seems to play a very important role in the origin, organization and evolution of life. It is an important support for the acquisition of new genomes and new metabolic capacities, which drives the organization and evolution of living forms. In this sense, evolutionary change can be explained by an integrated cooperation between organisms, in which symbiosis acts not as an exception but as the dominant rule in nature. To put it in a nutshell: is the development of life a saltational symbiotic process, or a gradual process oriented by natural selection and leading organisms to adaptation? In a way, this dilemma may be found in two positions clearly expressed by Theodosius Dobzhansky and Jan Sapp. In 1973, Dobzhansky wrote his famous article "Nothing in biology makes sense except in the light of evolution", conveying the neo-Darwinist view of evolution. Thirty years later, Jan Sapp (2003) included in his book "Genesis: The Evolution of Biology" a new paradigmatic concept, "Nothing in evolution makes sense except in the light of symbiosis", giving shape to the new ideas on the process of evolution involving symbiotic principles. Indeed, especially since 1859, the year of the publication of Darwin's "On the Origin of Species", evolution has been considered the fundamental concept and organizing factor of modern biology, as well as its structural pillar.
Without denying many of the Darwinist principles, the worst thing we could do in the study of the process of evolution would be to conflate evolution with, or limit it to, the Darwinist or neo-Darwinist perspective. This leads to the erroneous idea that Darwinism or neo-Darwinism is the only synonym for biological evolution. Other evolutionary approaches exist, and they need to be deepened and discussed within the biological and social sciences. In this sense, we would like to bring together a set of principles and data that could be integrated into a Symbiogenic Theory of Evolution, contributing towards a new epistemological approach to the symbiotic phenomenon in the evolutionary context. This, in our view, could be the beginning of a new paradigm in science that remains almost unexplored.


Pragmatic continuities in empirical science. Some examples from the history of astronomy

María de la Concepción Caamaño Alegre
(Stanford University, USA)

The purpose of this work is to emphasize the importance of those pragmatic aspects of empirical science that contribute to its continuity. After the more traditional analysis of empirical science as a corpus of theories has faced serious difficulties in preserving the common intuition that scientific development is, to a certain extent, cumulative in nature, and after the incommensurability thesis dramatically challenged the truth-cumulative view of science, the source of scientific continuity remains unclear. Within the post-Kuhnian tradition, I. Hacking's influential new experimentalism meant a first step towards a recognition of experimental practice as the key aspect determining scientific development. Nevertheless, the consideration of the pragmatic features of science that this recognition would demand is still tentative and dispersed in the current literature in the Philosophy of Science. This work is intended as an attempt to identify some of those features by examining the pragmatic continuities displayed by the development of astronomy, from the Babylonian period to Copernicus.

The paper consists of two parts: the first provides a clarification of the pragmatic approach adopted, and the second offers an application of such an approach to the historical examples. The basic assumption shared by the different pragmatic accounts of scientific development is that success in attaining certain goals justifies our beliefs about the world. However, there are alternative pragmatist accounts of the nature of such goals. Following Gerhard Schurz (The Role of Pragmatics in Contemporary Philosophy, Verlag Hölder-Pichler-Tempsky, Vienna, 1998) in his classification of pragmatisms, the main ideas introduced here could be broadly characterized as being in tune with the so-called "pragmatism to the right", or k(nowledge)-internal pragmatism (represented, among others, by C. S. Peirce, C. I. Lewis, the later H. Putnam, N. Rescher and G. Schurz), and as sharply disagreeing with the "pragmatism to the left", or k(nowledge)-external pragmatism (represented, e.g., by W. James, F. C. S. Schiller, R. Rorty and S. P. Stich). According to the former, scientific practice is guided by epistemic purposes; according to the latter, by non-epistemic purposes. The analysis of the pragmatic components of knowledge pursued here focuses on the consideration of k-internal goals, since, as the examples from astronomy show, cognitive purposes (like observation, prediction, explanation, control, innovation, acquisition of true beliefs, etc.) are the primary ones leading to the effective production of empirically sound knowledge. In elucidating what kinds of knowledge and practices are most useful for producing empirically sound knowledge, it is argued that procedural knowledge, that is, knowledge about how to perform certain tasks, has proved not only especially effective but also especially cumulative in character. Both formal and empirical (k-internal) useful procedural knowledge exhibits a highly invariant character, as the astronomical case illustrates. The fact that Babylonian computations survived several scientific revolutions in astronomy is not usually emphasized in philosophical discussions. Yet Babylonian computations supplied the main source of empirical data in Ptolemy's Almagest, which in turn provided the main source of empirical data for Copernicus' De Revolutionibus (J. Evans, The History and Practice of Ancient Astronomy, Oxford University Press, Oxford, 1998).

Finally, a further claim supported by the historical evidence offered in this work is the mutual independence of truth and usefulness. The understanding of truth in terms of usefulness has been a constant theme in pragmatism (even in Nicholas Rescher's methodological pragmatism, where methodological usefulness is associated with truth-conduciveness). In this paper, on the contrary, it is argued that k-usefulness gives us a criterion of empirical soundness with no necessary implication for truth. The continuities of empirical science are explained here not on the basis of truth, but on the basis of k-internal usefulness.


Bosbach and Riečan States on Pseudo-Residuated Lattices

Lavinia Ciungu 
(Mathematics, Polytechnical University of Bucharest, Romania)

States were first introduced for the commutative structures of MV-algebras by D. Mundici in [2] and for BL-algebras by B. Riečan in [3]. In the case of noncommutative fuzzy structures, they were introduced by A. Dvurečenskij in [4] for pseudo-MV algebras, by G. Georgescu in [5] for pseudo-BL algebras, and by A. Dvurečenskij and J. Rachůnek in [6] for bounded noncommutative Rl-monoids.

In the case of pseudo-MV algebras, A. Dvurečenskij proved in [7] that any pseudo-MV algebra M is isomorphic to Γ(G, u) = {g ∈ G | 0 ≤ g ≤ u}, where (G, u) is an l-group with strong unit u. This allowed him to define a partial addition +, where x + y is defined if x ≤ y⁻ = u - y.
The other noncommutative structures do not have such a group representation, and it was more difficult to define the notion of state for them. In the case of pseudo-BL algebras, G. Georgescu proved that any Bosbach state is also a Riečan state, and he left as an open problem to find an example of a Riečan state on a good pseudo-BL algebra which is not a Bosbach state. In the case of good bounded noncommutative Rl-monoids, Dvurečenskij gave an answer to this problem, proving that any Riečan state is at the same time a Bosbach state.

Inspired by the above-mentioned results, in this paper we extend the notion of state to pseudo-residuated lattices; the final results prove that in this case any Bosbach state is also a Riečan state, but the converse is not true, which answers Georgescu's open problem.
We first give the definition of a pseudo-residuated lattice and prove the basic properties of this structure, which are also valid for other structures such as pseudo-BL algebras, weak pseudo-BL algebras and bounded noncommutative Rl-monoids. The distance functions are defined, being very important for proving some of the main results. Then we define the Bosbach and Riečan states on pseudo-residuated lattices and investigate their basic properties, proving that any Bosbach state is a Riečan state. As an answer to Georgescu's open problem, we give an example of a Riečan state on a good pseudo-residuated lattice which is not a Bosbach state.
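
For orientation, the Bosbach condition in this noncommutative setting (the standard definition from the pseudo-BL and Rl-monoid literature, [5] and [6]; we state it here on the assumption that it carries over verbatim to bounded pseudo-residuated lattices with the two implications → and ⇝): a Bosbach state is a function

s : A → [0, 1] with s(0) = 0, s(1) = 1, and, for all x, y in A,
s(x) + s(x → y) = s(y) + s(y → x) and s(x) + s(x ⇝ y) = s(y) + s(y ⇝ x).

A Riečan state, by contrast, is only required to be additive on orthogonal pairs with respect to the partial addition, s(x + y) = s(x) + s(y), which is why it is the weaker notion here.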

References

[1] A. Dvurečenskij, S. Pulmannová, "New Trends in Quantum Structures", Kluwer Acad. Publ., Dordrecht, Ister Science, Bratislava, 2000.
[2] D. Mundici, "Averaging the truth-value in Łukasiewicz sentential logic", Studia Logica 55 (1995), 113-127.
[3] B. Riečan, "On the probability on BL-algebras", Acta Math. Nitra 4 (2000), 3-13.
[4] A. Dvurečenskij, "States on pseudo-MV algebras", Studia Logica 68 (2001), 301-327.
[5] G. Georgescu, "Bosbach states on fuzzy structures", Soft Computing 8 (2004), 217-230.
[6] A. Dvurečenskij, J. Rachůnek, "On Riečan and Bosbach states for bounded noncommutative Rl-monoids", Preprint Series, Mathematical Institute, Slovak Academy of Sciences, 2005.
[7] A. Dvurečenskij, "Pseudo-MV algebras are intervals in l-groups", J. Australian Math. Soc. 70 (2002), 427-445.


Felix Costa
(Department of Mathematics, Instituto Superior Técnico, Lisboa, Portugal)


A Unified Account of Irrationality

Vasco Correia
(Université Paris IV, France)

According to the formal accounts of rationality which prevail in rational choice theories, we should limit ourselves to considering as irrational those cases in which an agent acts or thinks counter to his own conception of what is reasonable, in other words cases in which the agent transgresses his own norms of rationality. The formal criterion of irrationality would then be an inner inconsistency in the agent's mind, arising either between his actions and his decisions, at the practical level, or between two of his beliefs, at the cognitive level. The interest of such a formal account is that it does not prejudge the agent's moral principles or values, allowing for an approach to rationality independent of any moral assumptions.
Under these conditions, self-deception and akrasia (or "weakness of will") appear to be the paradigmatic forms of irrationality. Admittedly, the reason for this is that in both cases the agent seems to "conspire" against himself. The akratic agent deliberately acts contrary to what he judges to be in his best interest, and the self-deceiver deliberately believes contrary to what he suspects to be the truth. In appearance, at least, this undermines the most common assumptions of theories of rationality: namely, that we tend to act upon what we judge better, and that we tend to believe what seems to be true. In an attempt to understand these paradoxes, many authors have proposed radical hypotheses, such as the partition of the mind, the existence of mental tropisms, and of course the existence of an unconscious.
My first claim is that irrationality (both cognitive and practical) can be understood without assuming that the mind is divided. The key to this view is the suggestion that the cause of irrationality is neither a conscious intention of acting contrary to what the agent himself considers reasonable (Sartre, Davidson, Pears), nor the direct effect of an unconscious desire (Freud, Audi), but rather the influence of our emotions (desires, fears, hopes, etc.) on the process of belief formation. This claim is supported by the vast empirical evidence provided by social psychology in recent decades, clearly showing that emotions indeed distort our beliefs in a variety of ways, namely by affecting the cognitive processes involved in the formation of belief (biased attention to evidence, biased interpretation of available evidence, selective memory, selective evidence gathering, etc.). Thus, although we seem unable to believe something "at will", simply because we want to, it may sometimes happen that our motives surreptitiously lead us to believe what we wish were the case (wishful thinking), even if this means accepting a certain proposition in the teeth of the evidence, which is precisely what self-deception is about.
But could such a cognitive model of irrationality be extended to the practical sphere and successfully explain weak-willed actions? My second claim is precisely that the influence our motives may exert over our judgments also accounts for genuine cases of practical irrationality. As some authors have recently pointed out (Elster, Ainslie, Bird), and as Socrates seems to suggest in Plato's Protagoras, weakness of will can be understood as a sort of "myopia" that affects human beings where time preferences are concerned. Loosely speaking, the idea is simply that, other things being equal, we tend to prefer immediate small rewards to greater future rewards. For example, if one has to choose between a small early reward (e.g. eating dessert after a meal) and a greater future reward (e.g. losing some weight), the early reward may appear bigger than the delayed one and, in consequence, one ends up eating the dessert. Thus, the akratic action does not appear to stem from a so-called "weakness of the will", but simply from an evaluation illusion which leads the agent to miscalculate the expected value of each option and, consequently, to choose an option that does not maximize his interest. This argument will then bring us to a more general conclusion, namely that practical irrationality stems from cognitive irrationality, and that both result from the several ways in which our desires may bias our judgments.
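
A toy numerical illustration of this "myopia" (my sketch, using the hyperbolic discount curve associated with Ainslie, who is cited above; none of the numbers come from the talk): with perceived value V = A / (1 + k*D) for a reward of size A at delay D, the small near reward wins at the moment of temptation, but the ranking reverses when both rewards are still far away, the signature of preference reversal.

def hyperbolic_value(amount, delay, k=1.0):
    """Ainslie-style hyperbolic discounting: the perceived value of a
    reward of size `amount` arriving after `delay` time units."""
    return amount / (1.0 + k * delay)

small, large = 4.0, 10.0    # the dessert now vs. the weight loss later
gap = 20.0                  # the large reward arrives `gap` time units after the small one

for wait in (0.0, 30.0):    # deciding at the moment of temptation vs. well in advance
    v_small = hyperbolic_value(small, wait)
    v_large = hyperbolic_value(large, wait + gap)
    choice = "small reward now" if v_small > v_large else "large reward later"
    print(f"delay to small reward = {wait:>4}: {choice} "
          f"(v_small = {v_small:.2f}, v_large = {v_large:.2f})")

# delay 0:  v_small = 4.00 > v_large = 0.48  -> the impulsive choice
# delay 30: v_small = 0.13 < v_large = 0.20  -> the far-sighted choice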


Argumentative Pluralism

Cédric Dégremont 
Laurent Keiff
(Université Lille III, France)

Logical pluralism has become standard in professional practice. With the exception of stances with (strong) philosophical motivation, it seems a fairly shared intuition that there is no unique acceptable logical theory. One can define the subject matter of logic as "cogent reasoning" (van Benthem [1]), and for more than a century the principal tool for studying it has been mathematical structures. The power of mathematical methods has allowed for the description and theoretical control of countless logical systems, but at the same time has turned fruitfulness into a problem. Which of the infinitely many logics available in the literature are principled and meaningful? How can one give an account of the intuitively sound claim that cogent reasoning cannot be just anything? Our proposal here is to seek in the theory of argumentation, and in its mathematical representation in game theory, the principles of a logical pluralism under control.

Dialogical logic ([2], [3], [4]) provides a conceptual setting for this proposal. Building upon a game-theoretical core, dialogical logic captures the meaning of the elementary logical notions as expressions of the features of a certain type of argumentation game in which two players interact, asserting claims and challenging them. More precisely, the usual dialogical games are two-player, zero-sum, well-founded games, where the first player (the Proponent, P) puts forth a main claim (the thesis of the dialogue), while the other (the Opponent, O) tries to refute it. The truth of a formula is understood in terms of the ability of the player who asserted it to defend it against the allowed criticisms. A formula is said to be dialogically valid if the Proponent has a winning strategy in the dialogue associated with it. The rules of dialogical games belong to two categories: particle rules, which define the local form of the game according to the main connective of the formula at stake, and structural rules, which define the global organisation of the game. The latter define the properties of the notion of validity: one changes the logic by changing the structural rules.

A crucial feature of dialogical systems is the so-called formal restriction. This structural rule states that P can assert an atom iff O has conceded this atom first. This rule has a strong argumentation-theoretic content: it amounts to saying that a claim of P is valid if he is able to defend it using only arguments grounded on the assertions (or arguments) of O, in the sense that all the non-logical claims are first made by O. Thus the properties of a dialogical system rely essentially on the way P can use the information conceded by O. For instance, the differences between classical, intuitionistic, (some) relevant and linear logics can all be expressed as constraints on the way P uses the conceded information.

Our point is thus the following. Let us define a game in which P is free to use the conceded information in a variety of ways, but with a cost attached to each type of move, according to the information flow it entails. P may then have a set of winning strategies in the game for a given thesis (i.e. strategies leading to a final position where O is to move and no legal move is available to him), and the costs allow us to compute the class of optimal strategies. Such strategies capture what one can see as the best ways to argue for the thesis, given some notion of what good argumentation is, coded in the costs of the moves. If the cost of a strategy is the cost of its most expensive move, then the cheapest strategy for a thesis indicates the class of logics in which the thesis is valid. A clear and easy implementation would give increasing costs to logics ordered by inclusion of their sets of theorems. If, on the other hand, one allows a computation of strategy cost based on the sum of the costs of the individual moves, the picture gets much more intricate. At least it can be said that this possibility opens a whole field of future research on the application of decision-theoretic methods and concepts to the evaluation of argumentative strategies.
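
A minimal sketch of the cost computation under the max-criterion just mentioned (my toy model, not the authors' system; the tree encoding and the cost figures are invented for illustration): at a P-node, P picks the cheapest move; at an O-node, a winning strategy must answer every O-move, so the worst case counts.

# Game trees: ("P", [(move_cost, subtree), ...]) or ("O", [subtree, ...]).
# P wins a branch when O is to move but has no move left.
INF = float("inf")

def strategy_cost(node):
    """Cost of the cheapest winning P-strategy, where the cost of a
    strategy is the cost of its most expensive move; INF if P cannot win."""
    player, moves = node
    if player == "O":
        if not moves:
            return 0.0                      # O is stuck: P has won, nothing more to pay
        return max(strategy_cost(child) for child in moves)
    if not moves:
        return INF                          # P is stuck: no winning strategy here
    return min(max(cost, strategy_cost(child)) for cost, child in moves)

# P may win via one expensive move (cost 5) or a longer line of cheap moves (cost 2):
tree = ("P", [(5, ("O", [])),
              (2, ("O", [("P", [(2, ("O", []))])]))])
print(strategy_cost(tree))                  # 2: the longer but cheaper line is optimal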

References

[1] van Benthem, J., "Foreword", in Abduction and Induction, Flach, P. & Kakas, A. (eds.), Kluwer, Dordrecht, 2002.
[2] Lorenz, K. & Lorenzen, P., Dialogische Logik, Wissenschaftliche Buchgesellschaft, Darmstadt, 1978.
[3] Rahman, S., Über Dialoge, protologische Kategorien und andere Seltenheiten, Peter Lang, Frankfurt, 1993.
[4] Rahman, S. & Keiff, L., "On how to be a dialogician", in Logic, Thought and Action, Vanderveken, D. (ed.), Springer, Dordrecht, 2004.


Unity of Science by Integration: A Case Study

Luc Faucher
(Département de Philosophie de l'Université du Québec, Montreal, Canada)

The project of the unification of science as spelled out by the logical empiricists has been mainly a philosophical project, that is, a project pursued mainly for philosophical reasons. Unity of science through reduction of language (Carnap, 1934; Hempel, 1934) or through reduction of theories (Oppenheim and Putnam, 1958) is motivated by monism, be it language monism (every theoretical term has to be reducible to physicalist terms) or theory monism (every theory has to be reducible to a lower-level theory and ultimately to physical theories). This project of a global reduction lost impetus with the wind of naturalism that has blown through philosophy of science in the last few decades. Philosophers of a naturalistic bent have been more attentive than their predecessors to what scientists actually do, and as a result they have concerned themselves with more local projects of unification. Indeed, when philosophers look at what scientists are really trying to do when they talk of unification, they may find at least one of the two following things: either scientists are trying to explain a phenomenon at one level by invoking some phenomenon at a lower level (by giving an explanation by local reduction, a mechanistic explanation, or some other form of explanation), or they are trying to build a framework to integrate prima facie incompatible theories (local integration). It is the second of these projects that we are interested in.

Instead of speculating abstractly about the conditions of unification, we will turn our attention to a precise case where there is a crying need for unification. One place where we have identified such a need is in the social sciences. In previous papers on racial cognition (Machery and Faucher, 2005, forthcoming), we have tried to provide a framework, inspired by the work of Boyd and Richerson on gene-culture co-evolution, to integrate two approaches dominant in the social sciences but seen as largely incompatible, namely the biological approach and the social-constructivist approach. In this presentation, we would like to go further. Despite the fact that most scientists agree that there are no such things as races in the world (or that races are not natural kinds), racial cognition has been the object of intense research in many different (scientific) fields: social psychology, evolutionary psychology, history, anthropology, neuroscience. The problem, as we see it, is that for methodological and disciplinary reasons those fields have failed to interact with each other. The net result is that many of the insights of one field are ignored by the others, and vice versa. This is a genuine problem (not only a 'philosophical' problem). Such a problem, so we claim, has to be solved in a bottom-up fashion, that is, by looking one by one at the obstacles in the way of integration and by trying to remove them. As noted, we have already provided a framework for the integration of the evolutionary-psychology approaches and the social-constructivist approaches that dominate the humanities. This time, we would like to look instead at the problems of integration encountered by neuroscience, evolutionary psychology and social psychology. To our mind, the efforts of integration should focus on social psychology because of its strategic position between neuroscience and evolutionary psychology. Indeed, the most interesting work in evolutionary psychology has been concerned with providing social psychology with a clearer picture of the cognitive architecture sustaining social phenomena (but also with more realistic models of learning), while neuroscience has been using the data and paradigms of social psychology to pursue its foray into brain mechanisms.


Théâtre @ Science

Carlos Fragateiro
(Director of the National Theater D. Maria II and of the Teatro Trindade, Lisbon, Portugal)


Unity in Diversity

Marie-Hélène Gorisse
(Université Lille III, France)

Is it possible to formulate the features of a dialogue in which different traditions try to articulate and respect their respective backgrounds while finding at the same time a common field of epistemological negotiation where knowledge is to be won?

Instead of directly answering such a question, we will more modestly study the response which, we think, follows from the Jaina reflections on the dia-logic of an epistemological negotiation such as the one just mentioned. More precisely, we present two possible interpretations of the logic of knowledge of the Jainas and of their general claim according to which all reasoning is conditioned reasoning.

Jaina philosophy shares with Buddhism and Hinduism the aim of striving for absolute liberation from the factors which bind human existence. For the philosophical systems of Indian thought, ignorance (of one's own nature, of the nature of the world and of one's role in the world) is one of the chief such factors, and Jainism offers its own insights into what constitutes the knowledge that has the soteriological function of overcoming ignorance.
From this, the Jainas developed, from the ninth up to the thirteenth century, an epistemology of omniscience, which is an attempt at epistemological closure in the sense that it amounts to the integration of all the different possible perspectives on the object of knowledge. The Jaina interest in logic arose through a consideration of inference as a mode of knowledge. Developing against an epistemological background rich in scientific debates, the pluralistic logic of the Jainas therefore aims to harmonise the different scientific theories. For this, they devised a kind of modal logic able to articulate several standpoints. In this conception, the role of logic, named the syâd-vâda ('the doctrine of the "somehow"' or 'the doctrine of the modes of predication'), is to regulate epistemological and inferential variation.

Our purpose is to use modern tools to bring out the actual tenets of their logical and epistemological dynamic. We will base this interpretation on three texts which are central to the Jaina philosophy of logic, namely the Nyâyâvatâra of Divâkara, fifth century; the Âptamîmâmsâ of Samantabhadra, seventh century; and the Prameyakamalamârtanda of Prabhacandra, ninth century. One approach, which we will call the external point of view, assumes the frame of non-normal modal dialogic, a game in which the role of each of the two players is precisely to make explicit the presuppositions of the theoretical assertions of the other player, and in which not even a necessary proposition such as "a=a" is assumed to have universal scope. To accomplish the formal task of conceiving the integration of different theoretical presuppositions, we will make use of hybrid languages, which extend the language of logic with mechanisms for referring to states.

The second approach, namely the internal point of view, tackles the question of the dynamics of making two different points of view compatible. Here the technical language we make use of is that of adaptive logic. More precisely, the philosophical point of developing adaptive logics applied to the epistemology of the Jainas is that, since we are not omniscient, we can never be sure that something our theory would define as a problem will not appear in our set of derived conclusions. We therefore have to develop a dynamic logic able to deal with such problems whenever they appear, so as to prevent the whole theory from becoming trivial and useless.
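As a minimal illustration (our gloss, using the standard devices of hybrid logic rather than anything quoted from the authors): hybrid languages add nominals s, t, … that name individual states, together with satisfaction operators, so that a standpoint-conditioned assertion, ‘somehow, i.e. from standpoint s, A’, can be rendered

\[
@_s A , \qquad\text{while}\qquad @_s A \;\wedge\; @_t \neg A \quad\text{remains consistent for } s \neq t,
\]

so that affirming A from one standpoint and denying it from another involves no contradiction, which is precisely the regulative role ascribed above to the syâd-vâda.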


Theory Change: A Metalogical View

Reinhard Kahle
(Departamento de Matemática, Universidade de Coimbra, Portugal)


In this talk we address the problem of the meaning of theoretical terms in the process of theory change. In contrast to traditional approaches, we adopt a strict proof-theoretic view in the determination of meaning. Our approach is part of a general programme to analyze intensional phenomena from a proof-theoretic perspective. It belongs to the field of proof-theoretic semantics, in its broad sense.

We consider theories (physical, mathematical or otherwise) given as axiomatic systems. The particular derivation system (Hilbert-style, Gentzen-style or natural deduction) should not be of relevance for what follows. To fix a common ground we will restrict ourselves to systems over a first-order language (though, in principle, our approach should be extensible to any other kind of formal language).

Given a first-order axiom system, we adopt the view that this system implicitly defines the meaning of the non-logical terms used in it.

This notion of implicit definition allows for (quite far-reaching) underspecification of the meaning of the terms, in contrast to the traditional semantic view, which tries to provide concrete denotations in a semantical universe. In the axiomatic approach the meaning of a term is given only by its relation to the other terms. This view is best illustrated by David Hilbert's famous remark on the axiomatization of geometry: the axiomatization would speak just as well about tables, chairs and beer mugs, as long as they related to each other as points, straight lines, and planes do.

Given an axiomatic system - coming along with its implicit characterizations of the terms used in it - we can consider the following four different forms of theory change:
1) Specification: The meaning of a term is narrowed. A prime example is the addition of an axiom about the term.
2) Liberalization: The meaning of a term is widened. An example is the extension of the domain of a quantification in the axiom system.
3) Modification: The meaning of a term is modified. We take a modification to be the change of one axiom about a term while the bulk of the axioms stays unchanged.
4) Revolution: The meaning of several terms is altered. In this case, the whole axiom system is modified in a substantial way.

This list is not intended to be complete; rather, it illustrates how the different forms of theory change can be characterized in our framework.
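To fix ideas, the four forms can be phrased as operations on an axiom system represented as a bare set of formulas. The following is our deliberately naive sketch; the toy axioms (a fragment of the metric-space axioms) are invented for illustration:

```python
# Toy model: an axiom system is just a set of axiom strings.
# The example axioms below are invented for illustration only.

def specification(theory, new_axiom):
    # 1) Narrow a term's meaning by adding an axiom about it.
    return theory | {new_axiom}

def liberalization(theory, old_axiom, widened_axiom):
    # 2) Widen a term's meaning, e.g. by extending the domain
    #    of a quantification in one axiom.
    return (theory - {old_axiom}) | {widened_axiom}

def modification(theory, old_axiom, new_axiom):
    # 3) Change one axiom about a term; the bulk stays unchanged.
    return (theory - {old_axiom}) | {new_axiom}

def revolution(theory, new_axioms):
    # 4) Replace the system in a substantial way; several terms
    #    change their (implicitly defined) meaning at once.
    return set(new_axioms)

T = {"forall x,y: d(x,y) = d(y,x)", "forall x: d(x,x) = 0"}
print(specification(T, "forall x,y: d(x,y) = 0 -> x = y"))
```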

As appealing as this illustration might be, the real challenge for an account of theory change is still pending: how can a new theory be related to an old one if the terms have different meanings?

Except in the case of revolutions, we will deal with this question by singling out those axioms which the old and the new theory have in common. If these axioms still provide a sufficiently specific implicit definition of the term in question ('sufficiently specific' understood in the sense that representatives of the old and the new theory can agree on the meaning on this common ground), the analysis of the change can be based on this implicit definition.

It is worth noting that this analysis of smooth changes is not necessarily transitive. Even if the change of the meaning of a term from an axiom system A to B, and a further change from B to C, can be analyzed in this way, it might be that a direct change from A to C is not analyzable, since it lacks a sufficient common ground.
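Schematically (again our toy rendering, with an artificial threshold of two shared axioms standing in for "sufficiently specific"):

```python
# Three toy axiom systems; common ground = shared axioms.
A = {"ax1", "ax2", "ax3"}
B = {"ax2", "ax3", "ax4"}
C = {"ax3", "ax4", "ax5"}

def analyzable(old, new, threshold=2):
    """Smooth change: the common ground is rich enough."""
    return len(old & new) >= threshold

print(analyzable(A, B))  # True:  common ground {ax2, ax3}
print(analyzable(B, C))  # True:  common ground {ax3, ax4}
print(analyzable(A, C))  # False: common ground {ax3} is too thin
```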

In a more general perspective, the proof-theoretic analysis provides us with additional structure which is lost when a theory is given as a raw set of propositions. It is this additional structure that provides extremely useful information for analyzing all kinds of intensional phenomena, in particular if we take the change of axiom systems into consideration as a meta-operation.


 

Sandeep Kumar Ganji 
(Hyderabad, India)


Unity of Science against the Rational Construction of Knowledge

Catherine Laurent
(INRA, Paris, France)


The Structure of Styles

César Lorenzano
(Universidad Nacional de Tres de Febrero, Buenos Aires, Argentina)

In 1915, Heinrich Wölfflin published a book, Principles of Art History: The Problem of the Development of Style in Later Art, which from that moment became an obligatory reference for the most important theoreticians and historians of art. The history of art as a history of styles, Wölfflin's proposal, was accepted by authors like E. Gombrich (1984), E. Panofsky, or A. Hauser as knowledge presupposed by their own developments.

In this article I will reconstruct that theory following an amended version of the structuralist conception of theories, so as to make it easier to understand for those who are not familiar with its jargon or with the peculiarities of model theory and group theory. Besides these expository reasons, there are ontological and epistemic reasons for doing so.

I will begin by expounding the theory of styles by means of a hypothetical, but not for that reason unfaithful, narration of the genetic method that led to its formulation.
Proceeding in this way naturally introduces the strategy of reconstructing the theory starting from its exemplars.
Finally, diagrams will be used to show the elements and relationships of its texture, leaving aside the language of model theory and group theory.
Several reasons exist to analyze the theory Wölfflin proposes, beyond its acceptance by the community of theoreticians and historians of art and its continued validity almost a hundred years after its formulation.
In the first place, there is the intrinsic interest of the history of art as a field of legitimate knowledge, one habitually neglected by the philosophy of science. We note that the history of art is, together with the history of science, one of the first specialized histories to obtain institutional legitimacy, becoming independent of what we could call "the great history", the story of power and its sways.
As we will verify later on, the history of art possesses explanatory theories. Wölfflin's is as much a theory about art and its stylistic texture as about the history of art and its evolutionary stages, and it presents a close parallelism with theories of the history of science, a phenomenon barely noticed given the separation between the communities of researchers in the two disciplines.
It is possibly the first of a long family of approaches that incorporate into the texture of the narrative the architectural properties of what that history is about. When we specify Wölfflin's theory of styles, its likeness to the approaches of Thomas Kuhn and Ludwik Fleck will become apparent.


Feelings in the Face of the Unity of Science

Elisa Maia  
Isabel Serra  
(CICTSUL / Faculdade de Ciências,  Universidade de Lisboa, Portugal)

This paper explores, on the one hand, the impact of the status of the unity of science on the emotional processes that format scientific practice and, on the other hand, how certain feelings grant a sense of unity to the practice of the sciences.

First, we draw a scenario in which we show, in simplified form, the development of the idea of the unity of science. We characterize the different moments of the philosophical perspective on the unity of science by choosing examples from the history of science to illustrate the different periods. Then, we describe the emotional identity of the historical periods thus identified, characterizing the mood of each period as well as the emotional character of the scientists in the various examples given.

Second, building on previous research, we introduce the triad “love, faith and hope” as fundamental emotions of scientific practice. We characterize such emotions as complex by explaining how these feelings should be seen as the result of complex response/meta-response emotional processes. Then, we show how the mark of surprise in scientific discovery can also be understood as a complex emotional whole. Finally, we characterize the triad of emotions for the different periods identified aiming at showing how the triad has changed and developed and, simultaneously, how it seems to have remained the same.

We establish a conclusion in two parts. First, we summarize the insights of the reflection, showing how the idea of the unity of science both mirrors and formats the sense of unity in scientific practice. Second, we lay down some of the consequences of taking into consideration the emotional identity of the unity of science by tracing its impact on the interpretation of the history of science, and showing how it may serve as a guide to critically evaluate the current practice of science as well as education for science.


Great Beliefs in Scientific Thinking

Dina Mendonça  
(Instituto de Filosofia da Linguagem, Universidade Nova de Lisboa, Portugal)

Isabel Serra  
(CICTSUL / Faculdade de Ciências,  Universidade de Lisboa, Portugal)

In this talk we will bring to light the role of faith in beliefs in the development of scientific knowledge, and also present, in a systematic fashion, its presence in various scientific fields at their various historical moments.

Since ancient times, faith in certain beliefs has guided and unified scientific thinking by permeating all of science. One of the unifying beliefs is the belief in the harmony of the Universe. The impact of faith in this belief is so overpowering that its influence can be seen even in the perspective on mathematics: the beauty and elegance of mathematical expressions is supposedly undeniable evidence of the harmonic nature of the Universe. However, sometimes faith in certain beliefs causes controversy and, on occasion, even violent arguments among scientists, as seen with reductionism in biology or vitalism in chemistry.

No matter what the impact of beliefs is, it is certain that they have, both openly and in hidden ways, contributed to creating and maintaining the search for knowledge in the different areas, conditioning not only the development of the sciences but also the teaching of those same scientific fields. The paper will use selected examples to explore the ways in which the force of beliefs formats the scientific sense of unity.


XXth-Century Physics and the Unity of Science

Rui Moreira
(Faculdade de Ciências, Universidade de Lisboa, Portugal)

Niels Bohr was responsible for the introduction of the complementarity principle into physics. The move was intended to stress the existence of an irreducible irrational residuum which, according to Harald Høffding, Bohr's professor of philosophy, had emerged in the domain of psychology; Høffding himself tried to extend it to the domain of philosophy. What Bohr did was to extend this irreducible irrational residuum to the domain of physics. This amounted to a kind of reduction of physics and philosophy to psychology: in fact, following Høffding, every kind of thought should be psychologically possible, and the laws we find in the domain of psychology cannot be violated at any level of human thought activity.


Metaphysics of Impossibility and Unity of Science

Nuno Nabais 
(Departamento de Filosofia, Universidade de Lisboa, Portugal)


Sciences as Open Systems

Vitor Neves
(Faculdade de Economia, Universidade de Coimbra, Portugal)

Utopia is on the horizon.
I get two steps closer, and it moves two steps away.
I walk ten steps and it is ten steps further away.
No matter how much I walk, it will never be reached.
So what is utopia for? For this it serves: to walk.

Eduardo Galeano 

Does it make any sense to pursue the unity of science, after the failure of the Vienna Circle's project of unification of the sciences, and in an era in which the growth of knowledge continues to be primarily the product of specialized research (even if many important scientific advances have always come about as the result of conceptual and methodological borrowings and spill-overs across sciences)? The idea of the unity of science seems to face insurmountable difficulties. Not only is the possibility of unification of the different sciences widely regarded as a lost cause; lack of unity is also the most common situation within disciplines. It is against this backdrop that I will argue in this paper for unity in diversity, claiming that Science is plural, but that the utopia of unity should be kept at the back of one's mind, even if just as an aspiration, an unreachable ideal which leads us to transgress disciplinary boundaries (in an overall context of "dialogical encounter" across disciplines) and which keeps us looking for the connections and the totality.

My point is that, against the polar contrast between ever more specialized, fragmentary (closed) sciences and the reductionist projects of the unity of science, the sciences are to be construed (and developed) as theoretical open systems, with emergent properties irreducible to those of any one of them or to whatever basic characteristic, language or method/logic of inquiry one may consider. By an open system I mean a structure with connections (as any other system) in which, in contrast with a closed one, boundaries are semi-permeable and mutable, thereby enabling many and various influxes and outfluxes as well as contamination from other systems, and in which the constituent components and the structure of their interrelationships are not predetermined (a more detailed elaboration will be provided in the paper, in which the works on the meaning of open systems of Sheila Dow and Victoria Chick, e.g. Dow, 1997 and 2002, and Chick and Dow, 2005, are central). Emergence, in turn, is meant here as a basic feature of a stratified, multi-layered (understanding of) reality. A stratum (an entity or aspect) of reality is said to be emergent, or to possess emergent properties, “if there is a sense in which it has arisen out of some ‘lower’ level, being conditioned by and dependent upon, but not predictable from, the properties found at the lower level” (Lawson, 1997: 176), thus rendering the higher-level stratum irreducible to the lower-level one.

In order to illustrate and discuss the above-mentioned tension between specialization and unification, and how the alternative view of sciences as open systems may make a difference, attention will be given to Economics (and to its relations with the other social sciences). Two issues in particular will be explored, highlighting the contrasting views of mainstream economics and of the "Economics as Social Theory" project: (1) the language of mathematics (overwhelmingly regarded as the required language and instrument of reasoning in Economics) and (2) the rational choice model and the economic imperialists' claim that it "provides the most promising basis presently available for a unified approach to the analysis of the social world by scholars from different social sciences" (Becker, 1993).

 

 

Direct abduction through dual resolution
 
Angel Nepomuceno
Fernando Soler 
(Departamento de Filosofía y Lógica y Filosofía de la Ciencia, Universidad de Sevilla, Spain)



In recent years a debate over the characterisation of abduction as the logic of discovery has arisen in the philosophy of science (Paavola, 2004). There are many positions in defence of abduction (Hintikka, 1998), but all of them impose essential requirements for considering abduction the logic of discovery.

On the other hand, parallel to this discussion, many logical and computational approaches to abduction have been developed (Kakas et al., 1998). Most of them use standard deductive calculi to attack abductive problems (Mayer and Pirri, 1993). Informally, if T is a set of formulae (which plays the role of a theory) and F a formula (the surprising observed fact), then the pair (T,F) constitutes an abductive problem whenever F does not follow from (is not a logical consequence of) T. Then E (a formula representing the explanation) is a solution for (T,F) if T+E entails F. It is usual to add further constraints on the abductive solutions. But in classical logic “T+E entails F” is equivalent to “T+(not F) entails (not E)”. When using a deductive calculus for abduction, given the abductive problem (T,F), one obtains conclusions from T+(not F), and each of their negations is then a possible explanation.

But proceeding in this way is somewhat similar to reductio ad absurdum, because the abductive search starts precisely from the background theory together with the negation of the observed fact. We think this is one of the causes that most contribute to moving logical abduction further away from the great expectations coming from the philosophy of science and epistemology. It is hardly believable that to explain F one should start precisely from the negation of F; actually, F is the only empirical datum in the abductive problem. Though every abductive solution E obtained in this way is a correct one, it is not possible to take these procedures as a logical model either of scientific discovery or of common-sense reasoning.

We present a logical calculus, delta-resolution (Soler et al., 2006), proposed to attack abductive problems in a more direct way than the common approaches. It is dual to classical resolution, and its starting point is the classical equivalence of “T+E entails F” with “E entails (T implies F)”, so one does not have to deny the empirical data but rather to look for hypotheses E which fill the gap in T in order to explain F. The definitions and properties of delta-resolution are dual to those of the standard resolution calculus. A procedure to produce consistent explanatory abduction by delta-resolution is proposed, following these steps: theory analysis, observation analysis, refutation search and, finally, explanation search. We conclude by studying the connections between these procedures and some ideas coming from the philosophy of science.
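To make the contrast concrete, here is a deliberately brute-force propositional sketch (ours, not the authors' calculus; the atoms and the toy theory are invented). Instead of reasoning from T plus the negation of F, it searches directly for literal hypotheses E such that T+E is consistent and T+E entails F, i.e. such that E entails (T implies F):

```python
from itertools import product

ATOMS = ["r", "w"]  # invented toys: r = "it rained", w = "the grass is wet"

def models(formula):
    """All truth assignments (dicts over ATOMS) satisfying `formula`."""
    return [m for vals in product([True, False], repeat=len(ATOMS))
            for m in [dict(zip(ATOMS, vals))] if formula(m)]

def entails(premise, conclusion):
    return all(conclusion(m) for m in models(premise))

T = lambda m: (not m["r"]) or m["w"]   # background theory: r -> w
F = lambda m: m["w"]                   # surprising observed fact: w

# Candidate explanations: single literals over the atoms.
candidates = {"r": lambda m: m["r"], "not r": lambda m: not m["r"],
              "w": lambda m: m["w"], "not w": lambda m: not m["w"]}

for name, E in candidates.items():
    TE = lambda m, E=E: T(m) and E(m)
    if models(TE) and entails(TE, F):   # consistent and explanatory
        print(name, "is an abductive solution for (T, F)")
# "w" also prints: the trivial solution that the extra constraints
# mentioned above are meant to exclude.
```

Delta-resolution achieves the same directness syntactically, by a calculus dual to classical resolution, rather than by enumerating models as above.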

References
 Hintikka, J., What is abduction? The fundamental problem of contemporary epistemology. Transactions of the Charles S. Peirce Society, 34(3):503-533, 1998.
 Kakas, A., Kowalski, R., Toni, F., The role of abduction in logic programming. Handbook of Logic in Artificial Intelligence and Logic Programming, pages 235-324. Oxford University Press, 1998.
 Mayer, M.C., Pirri, F., First order abduction via tableau and sequent calculi. Bulletin of the IGPL, 1:99-117, 1993.
 Paavola, S., Abduction as a logic and methodology of discovery: the importance of strategies. Foundations of Science, 9(3):267-283, 2004.
 Soler, F., Nepomuceno, A., Aliseda, A., Model-based abduction via dual resolution. Logic Journal of the IGPL, 14(2), 2006.

 


Relevant Metaphysics of Causation

Marcus Romberg 
(Vastergatan, Sweden)

Causal reasoning provides us with tools which practically save, and take, lives every day. During the past 40 years quite a few theories of causation have been presented. Some of them have survived thanks to their problem-solving abilities; some survived because they created new philosophical fields. Despite the unquestionable everyday value of the concept, some philosophers do argue for an anti-realist view of causation. Do we not have enough evidence for the view that the world is a causal construction? Or do we, strictly, just have evidence for the relevance of a causal attitude towards the construction we refer to as "nature"? I believe the latter. I believe that causation is a second-order characteristic of the world rather than a fundamental or primitive property. To argue for such a statement, one has to be prepared to present some intelligible arguments for it. This paper is a defence of such a view.

A Dualistic Account
I share my view of relations with R. Carnap, and argue that two separate questions must be answered in order to formulate necessary and sufficient conditions for a relation to be considered causal.
1. What is the correlation between the connected objects (cause and effect)? And 
2. What is the nature, or essence, and the ontological status of supposed causal connectedness? 

By separating the questions it is possible to reach a solution that does not imply conflicts between causally and randomly dependent processes, which I think is of great scientific importance. Consider e.g. quantum mechanics, where it is normally considered an unsolvable enigma that what is described as a mere collection of independent probability distributions of quanta can, and in fact constantly does, manifest in our world as the very same marble, brown table or hydrogen atom. It is indeed interesting how probability-distributed quanta, described as wave functions in the micro-cosmos, manifest in the macro-cosmos as a causal, law-governed physical reality.

Differentiability
To reach my point, I will use as an illustrative aid mathematical functions and their graphs: one "well-behaved" sine function and one "pathological" modified Weierstrass function. What makes the pathological function significant is that, like a fractal, it has uniform and infinite complexity no matter how closely one "zooms in" to view the image. For this reason its curve does not appear to become smoother as one "zooms in"; no tangent can be fitted to the graph at any point, and thus the function is nowhere differentiable. One can imagine circumstances under which it is not unreasonable to assume that a physical manifestation of a pathological function might give rise to observations that count as evidence for a well-behaved function as the law-like approximation. E.g. the function f(x) = b·sin(x) gives a fairly good approximation to a physical manifestation of the pathological function above. It is clear that the well-behaved approximation has different second-order characteristics from those of the pathological function.
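The contrast can be made concrete in a few lines of code (ours; the parameter values and the truncation are our choices, constrained by the classical condition that 0 < a < 1, b a positive odd integer and ab > 1 + 3π/2, under which the limit function is continuous everywhere and differentiable nowhere):

```python
import math

def weierstrass(x, a=0.5, b=13, terms=50):
    """Partial sum of W(x) = sum_n a**n * cos(b**n * pi * x)."""
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

def smooth_approx(x, b=2.0):
    """The 'well-behaved' stand-in f(x) = b*sin(x) used in the text."""
    return b * math.sin(x)

for x in [0.0, 0.1, 0.2]:
    print(f"x={x:.1f}  W(x)≈{weierstrass(x):+.4f}  b·sin(x)={smooth_approx(x):+.4f}")
```

Zooming in on the partial sums only reveals more oscillation; that persistence of roughness at every scale is the second-order characteristic the argument trades on.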

I argue that our concept of causation is closely related to a second-order characteristic of the well-behaved function that approximately describes the process in question. Hence, causation can be considered a relevant feature of our scientific approximation, while not reflecting the absolute functional dependency. Random is equivalent to non-causal, or non-differentiable. The view presented makes it intelligible how, even in fundamental physics, random processes might still manifest in a causal way. To put it another way: it explains how randomized distribution at a lower level can express itself as causal distribution at a higher level. And, furthermore, it explains how randomness does not necessarily lead us to an unpredictable nature.


The Flat Analysis of Properties and the Unity of Science

Hossein Sheykh Razaee
(Department of Philosophy, University of Durham, UK)

Heil (2003) presented a flat analysis of properties, composed of two theses. According to the first, which rejects the dominant layered picture of reality, there is only one flat level of properties, and any two objects sharing a property are exactly similar in respect of that property. There are no multiply realizable properties; there are multiply realizable predicates, designating sets of similar (not exactly similar) properties. The second thesis expresses an identity theory: properties are simultaneously dispositional and qualitative. In this paper, I will argue that the first thesis entails a version of the unity of science.
Three components can be distinguished in any model of the unity of science. The first concerns the respect in which, according to the model, scientific theories are unified (in my model, the content of laws). The second concerns the strategy by which the unity of science in the alleged respect can be shown (in my model, the flat analysis of properties and realization). Finally, the third concerns the generality of the model: it indicates which theories the model covers (in my model, special sciences with multiply realizable predicates) and about which it remains silent.
Analyzing a special-science law that connects two multiply realizable predicates (P → Q) according to the flat view reveals that the antecedent and the consequent of this law designate sets of similar properties: the pi are similar properties designated by P, and the qi by Q.
Prima facie, the content of this law can be analyzed into three elements: (I) a similarity relation among the pi properties: p1 ≈ p2 ≈ p3 …, (II) a set of fundamental laws: {pi → qi}, and (III) a similarity relation among the qi properties: q1 ≈ q2 ≈ q3 ….
First, I will argue that the third component is expectable, if not deducible, from the conjunction of (I) and (II). Then, it will be shown that a special-science law is not so rich as to enumerate an endless set of possible and actual similar properties and an endless set of fundamental laws. Instead, we normally know a few examples of similar pi (say p1…pn) that we have experienced, and their corresponding fundamental laws (say p1 → q1 … pn → qn). However, restricting the content of a projectable special-science law to claims about a few experienced properties contradicts the projectibility of the law. We need an additional clause to guarantee that the law is applicable to new samples.
‘The Similarity Principle’ solves this problem: under the same circumstances, two similar properties, which in fact have similar dispositionalities, bring about similar results. This principle is a conceptual truth about the notion of similarity, and its truth stems from the definition of similarity. By accepting this principle, the content of the special-science law can be analyzed as follows: (I*) a fundamental law expressing that there is a nomological relation between p1 and q1 (or any other particular pair of pi and qi), and (II*) the Similarity Principle: under the same circumstances, any property similar to p1 (say pj) brings about a property (say qj) similar to what p1 brings about (q1).
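Compressed into a single schema (our notation, not the paper's):

\[
\underbrace{\,p_1 \rightarrow q_1\,}_{\text{(I*) fundamental law}}
\;\wedge\;
\underbrace{\forall p_j\,\bigl(p_j \approx p_1 \Rightarrow \exists q_j\,(p_j \rightarrow q_j \;\wedge\; q_j \approx q_1)\bigr)}_{\text{(II*) Similarity Principle}}
\;\Longrightarrow\;
P \rightarrow Q.
\]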
Now it can be seen that what a special-science law says is in fact a fundamental law plus a conceptual truth about the nature of similarity that is common to all special-science laws. Therefore, there is a unity between the content of special-science laws and the content of fundamental laws.

Reference:

Heil, J. 2003. From an Ontological Point of View, Oxford: Clarendon Press.


Science between Bi-dimensional and Tri-dimensional Representation

Dan Simbotin 
(Social and Economic Research Institute, Romanian Academy, Iasi, Romania)

The representation of science until Thomas Kuhn was simplistic. We can follow the argumentation specific to linear causal logic and identify the passage from P to C (where P represents the premises and C the conclusion). This implies the reduction or reordering of elements identifiable in P, differently ordered in C. We do not intend to make here a Baconian criticism of classical logic, but the explanatory capacity of such inferential structures, irrespective of how complex P or C is, is limited. This has also become apparent in the application of causal logic to the demands of quantum physics. The shift from the causal to the numerical leads us to consider the need to identify non-causal mechanisms of representation.

Thomas Kuhn's theory describes science in terms of images of the world, and it is one of the best-known ways of interpreting science. We consider every scientific paradigm as an image of the world, a sum of independent images developed by theories and scientific laws. In this context we have developed a new way of representing science using a matrix. We call it the matrix of images; it is composed of independent images that stand in interrelation according to Gestalt principles. Science can thus be presented as a bi-dimensional matrix organisation.

\[
\begin{pmatrix}
i_{11} & i_{12} & \cdots & i_{1m} \\
i_{21} & \cdots & x & y \\
\vdots &        & z &   \\
i_{n1} &        &   &
\end{pmatrix}
= I \quad \text{(general image)}
\]

The elements composing the matrix, denoted i_{n,m}, represent the particular images reflected by a scientific law or theory. The number of these images inside a matrix is infinite yet limited: infinite, because the number of images composing the matrix is in continuous transformation and numerical growth; and at the same time limited, because the general image is known and this limit cannot be exceeded. Besides the knowledge whose image exists, the matrix also contains unknown elements, noted x, y, z. These are identified by association with the general image and with the other elements of the matrix. The unknown elements inside the matrix are discovered by association between images on the basis of the Gestalt principles: proximity, similarity, continuity, closure, figure and ground.

Every scientific discovery is an operation of identifying the unknowns within a known matrix. This is the operation of discovery in what Thomas Kuhn called "normal science". In "revolutionary science" the operation is instead one of reconstructing the matrix according to new criteria.

\[
\begin{pmatrix}
i_{11} & i_{12} & \cdots & i_{1m} \\
i_{21} & \cdots & x & \cdots \\
\vdots &        &   & i_{p,q}
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
i'_{11} & y' & \cdots \\
i'_{21} & z' & \cdots \\
\vdots  &    &
\end{pmatrix}
= I'
\]

Every science has a bi-dimensional matrix corresponding to its paradigm. But when we consider science in a holistic vision, the matrix is tri-dimensional, and the relationships between all the images must respect the Gestalt principles too. Here we identify the difference between science in the first years of modernity, when science was a "mathesis universalis" and the matrix of all science was bi-dimensional, and contemporary science, where every science has its own paradigm and, in view of transdisciplinary aspects, the matrix is tri-dimensional. In this context truth is perceived in different modes: we can speak of a bi-dimensional and a tri-dimensional truth for every kind of science.


Appearance or Existence of the Entity Realism “Sense” or Mind

A. Yazdani
(Physics Department, Faculty of Basic Sciences, Tarbiat Modares University, Tehran, Iran)

Two limiting boundaries exist in the domain of physics. At one, the prime task of fundamental physics is to understand the objects, laws, or whatever else is the basis of all observed phenomena. At the other, there is a rhythm and pattern between the phenomena of nature which is not apparent to the eye, but only to the conscious mind of the analyzer. At the same time, the development of scientific techniques has increased our capacity for detailed observation, and with it the complexity of our formalism. Consequently, the concept of basic conditions and the character of physical laws as the fundamental principles of physics should be clarified. One should ask how someone can find a formalism to understand the phenomena and/or conduct an experiment and, at the opposite extreme, how and why someone may lose themselves, or even run away, within each point of view.
What does the concept mean? What are the phenomena, and under what conditions do they exist? What is the truth about them? What is the correspondence principle of mind and phenomena? What should be the nature of the space, of the related functions (or equations), and of the formalism in which natural phenomena can be considered? Moreover, suppose we were able to find a theory which explains all observed (or existing) phenomena: what are the basic processes through which someone can construct the behavior and character of the essential concept of a phenomenon? There must be a conscious mind able to speculate on the relations between basic conditions, physical parameters and even physical principles.
The progress of modern science signifies its penetration into a wide range of once mysterious research in this century. Principles of symmetry long played little explicit role in theoretical physics. The everyday definition of symmetry is:
I. having an exact correspondence in size or shape between opposite sides of a structure; or
II. a regularity between exchanged parts, where both sides look exactly the same. Generalized crystallography, closely related to the theory of symmetry, has applied with success the concepts of packing, order and structural hierarchy to discuss complex natural structures.
Ever-growing attention is being paid to a special type of hierarchical ordering in fractal structures and, on the other hand, to the existence of symmetric relations among cognitive capacities. In addition, the topological and symmetry properties of space and time, and gauge invariance, underwrite the conservation laws of the surrounding world.
Thus symmetry, systematicity, regularity and ordering parameters seem absolutely fascinating to the human mind, as a conscious mind should be. This may be defined as the correspondence principle between the geometry of the conscious mind and the geometry of space. In this case one has a feeling or imagination about the physical laws of nature which is very close to the feeling of the corresponding symmetry of objects and mind, namely the symmetry of the laws.


The Principle of Eurhythmy. A Key to the Unity of Physics

José Croca 
(Faculdade de Ciências, Universidade de Lisboa, Portugal)

The aim of physics has always been unity. This ideal means that physicists look for a very basic principle from which it would, at least in principle, be possible to derive the laws describing physical reality at its different scales of observation. Presently physics is faced with two independent, even opposed, theories valid at different scales: classical physics and quantum physics. Classical physics holds at the macroscopic scale, while quantum physics is valid at the atomic and molecular level. The unification of the different physical laws, from classical physics and quantum physics to gravitic physics, now seems possible. The key to this unity is the principle of eurhythmy. The word comes from the Greek, meaning, literally, the principle of the adequate path. It will be shown that Heron's principle of minimum path, Fermat's principle of minimum time, basic to classical physics, and de Broglie's guiding principle, a cornerstone of nonlinear causal quantum physics, are no more than particular cases of the principle of eurhythmy. Furthermore, it will be seen, with concrete examples from classical physics, quantum physics and gravitic physics, that all these different branches of physics can be unified and understood in a causal way as particular cases of this general principle. This principle allows us to see the diverse aspects of physical reality at different scales of observation as a single coherent, causal and beautiful whole.
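For orientation, the two classical principles invoked can be written in their standard variational forms (our gloss from elementary optics, not a quotation from the author):

\[
\text{Heron (minimum path):}\quad \delta \int ds = 0,
\qquad
\text{Fermat (minimum time):}\quad \delta \int \frac{n\,ds}{c} = 0,
\]

where n is the refractive index and c the speed of light in vacuum. On the view summarized above, these, together with de Broglie's guiding principle in the quantum domain, appear as particular cases of the one principle of eurhythmy.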


Bibliography

J.R. Croca, Towards a Nonlinear Quantum Physics, World Scientific, London, 2003.


 


More information regarding this Colloquium may be obtained from the website

http://cfcul.fc.ul.pt/coloquioscentro/coll.unidade_cc.htm