Unity of Science. From the Idea to the Configurations
Olga Pombo (Faculty of Sciences, Lisbon University, Portugal)
Unity of Science is both a regulative idea and a task. That is why it has been grasped through the most extreme metaphors of an invisible totality and has given rise to several epistemological programs and intellectual movements. However, before turning to such exemplary issues, I will pay attention to the deep, institutional configurations of Unity of Science (Library, Museum, "République des Savants", School and Encyclopaedia) and to their polyhedric articulations. More than a game of complementarities, what seems interesting is to show that their structured relationship is endowed with an important descriptive and normative capacity.
Dealing with Uncertainty in Modern Science
Dinis Pestana (Philosophy Department, Lisbon University, Portugal)
For a while, scientists tried to set aside Heraclitus' legacy, that contingency and ephemerality are at the core of reality. But quantification has been Science's way, and metrological issues have brought to the forefront errors in measurement and the central role of uncertainty; by 1920 Pólya had christened the asymptotic results on the “normal” approximation the central limit theorem, recognizing it as the ultimate weapon for measuring with the accuracy we need, insofar as we can pay for a long run of measurements. This is but one well-known instance where coming to terms with uncertainty pays much better dividends than trying to avoid it.
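In standard textbook form (the notation here is the usual one, not taken from the talk itself): if X_1, ..., X_n are independent measurements with mean \mu and finite variance \sigma^2, the central limit theorem states that

\[
\frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \;\xrightarrow{d}\; N(0,1), \qquad \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i ,
\]

so the uncertainty of the averaged measurement shrinks like \sigma/\sqrt{n}: quadrupling the number of measurements roughly halves the error, which is precisely the sense in which accuracy can be bought with a long run of measurements.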
Since their appearance as branches of Mathematics, Probability and Statistics have been part of the toolbox used by scientists in all fields to cut through the deep complexity of data, accompanying and sometimes preceding the advancement of science. The total probability theorem is, in fact, Descartes' method of dealing with a complex problem: splitting it into simpler sub-problems and working out a solution as the blending of the partial solutions of these sub-problems; and three centuries later, Fisher's analysis of variance is a brilliant and pathbreaking example that Descartes' method can be inappropriate, that some problems must be solved as a whole and cannot be split into sub-problems. Fisher's work has also shown that science had to move from the observational gathering of data to the production of relevant data, with the new discipline he invented, Design of Experiments, since this way of dealing with information is much more rewarding in knowledge building.
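As a minimal illustration of that contrast (a standard statement, not Pestana's own formulation): if B_1, ..., B_k split the sample space into mutually exclusive and exhaustive cases, the total probability theorem blends the conditional sub-problems back into the whole,

\[
P(A) \;=\; \sum_{i=1}^{k} P(A \mid B_i)\, P(B_i).
\]

Fisher's point about designed experiments is that this kind of decomposition breaks down when factors interact: the effect of one factor then depends on the levels of the others, so the sub-problems are not separable and the experiment has to be analysed as a whole.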
In fact, information is important insofar as it is at the root of knowledge building, and Probability and Statistics have a large share in the toolbox of methodologies that allow us to extract knowledge from the available data. In a sense, Probability and Statistics are our resources for taming uncertainty, sometimes using its very patterns as a source of knowledge in themselves. Unfortunately, the formal training of scientists relies much more on the ability to apply ad hoc techniques than on a deep understanding of the principles involved in statistical thinking. There is far too much “random search” for significant results, with too little critical appraisal of the information needed to reach conclusions, insufficient consideration of confounding, and a poor understanding of the essential role of repeatability in the experimental method.
We discuss the importance of planning experiments, issues concerning appropriate sample sizes, and concerns arising in meta-analytical methods, as well as the limits of statistical tools in the construction of science from empirical evidence.
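As a minimal sketch of the sample-size issue (the figures and the helper function below are hypothetical, chosen only for illustration): if one wants to estimate a mean to within a margin of error E with approximate confidence 1 - \alpha, the central limit theorem suggests n \approx (z_{1-\alpha/2} \sigma / E)^2, which in Python reads

from math import ceil
from statistics import NormalDist

def sample_size(sigma, margin, confidence=0.95):
    # Smallest n with z * sigma / sqrt(n) <= margin (CLT approximation,
    # assuming a guessed value of sigma is available beforehand).
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * sigma / margin) ** 2)

print(sample_size(sigma=2.0, margin=0.5))  # 62 measurements for these made-up values

Halving the margin of error quadruples the required sample size, which is one reason why planning the experiment before collecting data matters so much.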
More information regarding
this Colloquium may be obtained from the website
http://cfcul.fc.ul.pt/coloquioscentro/coll.unidade_cc.htm