Why do we foster and grant wrong innovative scientific methods? The Neuroscientific Challenge

Jordi Vallverdu

Department of Philosophy, Room B7/104, 08193 Bellaterra (BCN), Catalonia, Spain

DOI: 10.15761/JPP.1000114

Abstract

National and international research programmes (such as FP7 or H2020 in the EU) foster and reinforce specific ways of doing research, selecting the best proposals to be carried out with the funds provided. Very frequently, these programmes are based on beliefs about the reliability of the proposed approaches and their chances of producing successful research. We analyse how the EU flagship Human Brain Project (henceforth HBP) was overestimated as a successful innovative project and, on the other hand, how other innovative ways of working in neuroscientific research (fMRI, statistical data analysis) have led the discipline to important epistemological dead ends. Other flaws in new methodological implementations offer an insight into the complexity of advances in the research field. The keys to guidance and orientation in scientific innovation are thus revisited in the light of these phenomena.

Key words

neuroscience, innovation, grant, research funding

Introduction

Scientific innovators face several problems because of the novelty of their ideas or the methods related to them. Sometimes being innovative is a problem for the researcher who is presenting new results to the academic community. At the same time, official fostering and support of innovative methods can produce epistemic disturbances within the host communities. In any case, it seems clear that there are several ways to achieve scientific innovation despite the still dominant linear model (which postulates that innovation starts with basic research, is followed by applied research and development, and ends with production and diffusion [1]). During the process of scientific evolution of the disciplines (we do not need to consider the extreme case of a paradigm shift [2] for our current purposes), several strategies are possible [3,4]. But even in that case, and following [5], who affirms that discovery in the genuine sense is not susceptible to conceptual analysis, we think that these processes are basically complex and draw on several heuristics [6,7], as well as diverse strategies [8,3]. Besides, the neurosciences are in fashion [9], and multiple academic disciplines try to create bonds with them. In the end, epistemic communities are competitive and try to organize themselves in order to facilitate knowledge advancement and creation [10-14]. These processes are now affecting and shaping neuroscientific research, and an updated epistemological review is required [15].

This paper discusses two related issues: a) how new methods for achieving innovative results in (neuro)science can be erroneously supported [4], and b) how new methods for achieving innovative results in science can be a source of epistemic opacity and excessive complexity when it comes to evaluating them. We address both questions within the same research field: neuroscience [16]. To answer the first question we discuss the Human Brain Project, while for the second we examine fMRI and the statistical analysis of research data.

Human Brain Project: innovation by mistaken beliefs about methodologies

The HBP started on 1 October 2013 as one of the leading European Commission Future and Emerging Technologies Flagship projects, and it is scheduled to run for ten years (2013-2023). The estimated full funding for the HBP is €1.19 billion; it connects 80 European and international research institutions and is associated with several important North American and Japanese partners (a total of 112 partners). The signing of the framework partnership agreement (FPA) confirmed that the HBP would continue to receive funding – pending successful independent reviews and proposal evaluations – from the European Union research and innovation programme Horizon 2020. A great deal of expectation and confidence (faith? or, at least, a “bet”) in future results lies behind the justification of such a huge investment [17]. In any case, shortly after its start a long list of European neuroscientists raised several important theoretical concerns about the project [18]:

  1. Blind emulation: without an existing “connectome”, the full process was not guided by precise hypotheses to be tested and checked. Without corrective loops between hypotheses and experimental facts, the huge amounts of data that could be obtained would not provide understanding. This was one of the first direct criticisms of big-data methodologies in neurological debates, a very important issue [19].
  2. Overoptimistic and wrong model: the HBP should either a) abandon neurological research and focus on technological advances, or b) be split into one neurological project and another technological project.
  3. Too expensive and unfair to other fruitful research: the project siphoned off important amounts of funding from other fundamental research.
  4. Centralized management: the group was excessively big, and the coordination mechanisms were not very clear.

This collective published a critical open letter, “Open message to the European Commission concerning the Human Brain Project”, on July 7, 2014. Eleven days later, the HBP provided an official EU answer, emphasizing the advances in ICT implementations as well as the need for several roadmaps for brain research. A Mediation Report was released in March 2015, in which most of the criticisms were confirmed [20]. In any case, after so many criticisms had emerged, a new contract signed by the HBP and the EU in November 2015 forced a reorganization of the management structure, ensuring that no single institution had overall control [21]. While it is true that the venture is generating knowledge about how to mathematically model some parts of the brain's circuitry, its main critics say the simulation can do very little that is useful or that helps us understand how the brain actually works. Curiously, at the same time the USA launched an ambitious new research project called BRAIN, in which many teams compete for grants and lead innovation in different, unplanned directions guided by competition and peer review. This diversity is producing good advances without so many criticisms and potential research pitfalls. Obviously, peer review is not a perfect solution [22], nor is it unbiased towards normal science [23].

From psychometrics to neuroimaging: the fMRI scandals

The neurosciences experienced a fundamental change when classic psychometric techniques were complemented and surpassed by neuroimaging techniques. Among the most widespread and successful of these, fMRI emerged as the gold standard in the field for 25 years. A few months ago, a study showed that statistical results obtained from fMRI studies could be heavily biased [24]. The authors demonstrated that the most common software packages for fMRI analysis (SPM, FSL, AFNI) could produce false-positive rates of up to 70%, far beyond the expected 5% false positives (for a significance threshold of 5%). Some methodological anomalies had already been detected in 2009 [25], in a study that received an Ig Nobel Prize but that raised a serious alarm about biased ways of doing research. It was, in the end, a study conducted by a graduate student, who performed an fMRI scan of a dead salmon and found neural activity in its brain when it was shown photographs of humans in social situations. To this we can add the low rates of seriously reproducible results in psychological experiments involving neuroimaging techniques [26], or some earlier flaws detected by Logothetis and others [27,28]. Here we face a very important problem: the main research instrument of a research field is technologically flawed for several reasons, but no true alternative or quick solution is available. Nevertheless, the community reaffirms its confidence in it, adding new ideas to the main debate, including specific epistemological ideas [29,30], most of them under review because of the several flaws detected.
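
The general point about uncorrected mass-univariate testing can be made concrete with a minimal simulation. The sketch below (in Python; the voxel and subject counts are hypothetical, and it does not reproduce the cluster-wise analyses examined in [24]) runs a t-test on thousands of pure-noise “voxels”: around 5% of them come out “active” at p < 0.05 by chance alone, which across a whole scan amounts to hundreds of spurious detections unless a correction for multiple comparisons is applied.

```python
# Illustrative sketch only: a toy simulation (not the analysis of [24] or [25]) of how
# testing many voxels at p < 0.05 without correcting for multiple comparisons yields
# spurious "activations" even in pure noise. Voxel and subject counts are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_subjects, alpha = 10_000, 20, 0.05

# Pure noise: no voxel carries any true effect (a "dead salmon" dataset).
data = rng.normal(size=(n_subjects, n_voxels))

# One-sample t-test per voxel against zero.
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=0)

uncorrected = np.sum(p_vals < alpha)
bonferroni = np.sum(p_vals < alpha / n_voxels)

print(f"'Active' voxels without correction: {uncorrected} "
      f"(~{100 * uncorrected / n_voxels:.1f}% of all voxels)")
print(f"'Active' voxels with Bonferroni correction: {bonferroni}")
```

The dead-salmon study [25] made this point empirically: with enough voxels, “significant” activations appear even where no signal can possibly exist.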

Statistics vs. data science in neuroscientific research

There is one last important issue: the statistical processing of experimental data. Contemporary scientific research is mainly statistically driven [31], managing larger amounts of data than at any previous moment in history [19]. As a consequence, the old [32] and classic debates on statistical analysis are now at the centre of the epistemic storm, both for big and for small data analysis [33-35]. At the same time, new techniques, like automated robotic whole-cell patch-clamp electrophysiology of neurons in vivo [36] or dimensionality reduction [37], which brings a new class of machine learning algorithms for interpreting neural activity, radically transform the ways of doing neuroscientific research. Data mining is also showing that it is possible to mine rules from a subset of data and use them to complete the dataset computationally [38,39], because once the rules have been validated in similar but independently collected datasets, they can be used to predict similar behaviours. There is also a related problem: most datasets are private and cannot be checked, which makes it very difficult to verify or even replicate current experimental research. Some initiatives, like “openfmri.org”, try to diminish this epistemological opacity. Others, like “neurosynth.org”, run platforms for large-scale, automated synthesis of functional magnetic resonance imaging (fMRI) data. The implementation of e-Science methodologies in neuroscientific research is still at the beginning of a necessary process [40]. A final problem is related to this data analysis: for several processes, such as dreaming or placebo activation, neural mechanisms are not clearly ascribed to cognitive functionalities [41].
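
As a purely illustrative sketch of what dimensionality reduction contributes here (using simulated data and ordinary PCA, not the specific algorithms discussed in [37]), consider a large population of neurons whose joint activity is driven by only a few latent signals: a handful of components then suffices to summarise the recorded activity.

```python
# Minimal sketch on simulated data: plain PCA as a stand-in for the dimensionality
# reduction methods discussed in [37]. Neuron counts and latent structure are invented.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_neurons, n_timepoints, n_latent = 100, 500, 3

# Hypothetical low-dimensional latent dynamics driving a large population.
latent = rng.normal(size=(n_timepoints, n_latent))
mixing = rng.normal(size=(n_latent, n_neurons))
rates = latent @ mixing + 0.5 * rng.normal(size=(n_timepoints, n_neurons))

# Reduce the 100-dimensional population activity to a few components.
pca = PCA(n_components=10).fit(rates)
cum_var = np.cumsum(pca.explained_variance_ratio_)
print("Cumulative variance explained by the first 3 components:", round(float(cum_var[2]), 3))
# With only 3 underlying latent signals, a few components capture most of the variance,
# which is the basic rationale for interpreting population activity in a reduced space.
```

This compression is what makes the interpretation of large-scale recordings tractable, but it also adds one more modelling layer whose own assumptions have to be evaluated.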

End remarks on paradigm shifting or discipline evolution

Henry Markram, the leader of the HBP, usually talks about the paradigm shift being produced by HBP research [42]. Perhaps this is true, or perhaps we are observing a flagrant case of a self-fulfilling prophecy: with that budget and those resources, this project will surely change the future of the neurosciences, but it is not so clear that it will do so for exactly the reasons Markram has in mind. Here we are surely seeing two different things: a) the introduction and evaluation of new research techniques, and b) the epistemological problems and challenges that arise from the use of intensive computational facilities [43]. Thus, we can confirm that several epistemological problems are present in contemporary neuroscientific research, most of them the result of three facts: the belief in the epistemic power of this field, the huge investments made in it, and the novelty of the several research methods involved. We can observe this in the lack of epistemological agreement on forward [44-46] or reverse [44,47-49] inferences in neuroscience [24,50,51], as well as on the role of images as evidential mechanisms [52], all of which are still under intense academic debate and have even become material for the training of neuroscientists [53]. Bigger and bigger research groups [54] also make the management of research difficult, as the HBP and BRAIN projects have demonstrated.
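
The disagreement over reverse inference can be illustrated with a simple worked example based on Bayes' rule; the numbers below are entirely hypothetical and are not taken from any of the cited studies. Even when a brain region activates reliably during a given cognitive process (a strong forward inference), the probability of that process given the observed activation can remain modest if the region also activates during many other tasks.

```python
# Hypothetical worked example of reverse inference (invented numbers, not from the
# cited literature): P(process | activation) computed with Bayes' rule.
p_process = 0.25                # prior probability that the task engages the process
p_act_given_process = 0.80      # forward inference: the region activates when the process occurs
p_act_given_not_process = 0.40  # the same region also activates in many other tasks

p_activation = (p_act_given_process * p_process
                + p_act_given_not_process * (1 - p_process))
p_process_given_activation = p_act_given_process * p_process / p_activation

print(f"P(process | activation) = {p_process_given_activation:.2f}")
# Prints 0.40: far from certain, despite the strong forward inference, because the
# activation is not selective for the process in question.
```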

Although they could be considered epistemological noise or epistemic confounders, all the detected problems (beliefs in the reliability of model design, statistical debates, computational debates) that we have observed are, in the end, the exemplification of a research field in constant and intensive evolution [55]. The main conclusion of this paper is that very active, innovative and socially impactful research fields like the neurosciences are fostered by social beliefs in the soundness of their approaches and expected results. At the same time, the implementation of new techniques (in this case fMRI, computational environments, statistical instruments, AI methods) introduces a deep controversy about their epistemic evaluation, a controversy that unfolds without any halt or pause in the research itself. Therefore, the path to new and powerful knowledge is usually accompanied by, or correlated with, tensions, disturbances or ambiguities. Neutral linearity disappears from knowledge acquisition and evolution; instead, we achieve a richer understanding of scientific practices. Because their failures occur under controlled and analysable conditions, these practices show themselves to be the best foundation of human knowledge.

Editorial Information

Article Type

Research Article

Publication History

Received: July 07, 2018
Accepted: July 30, 2018
Published: August 07, 2018

Copyright

©2018 Vallverdu J. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Citation

Vallverdú J (2018) Why do we foster and grant wrong innovative scientific methods? The Neuroscientific Challenge. J Psychol Psychiatry 2: DOI: 10.15761/JPP.1000114

Corresponding author

Jordi Vallverdu, MD

Department of Philosophy, Room B7/104, 08193 Bellaterra (BCN), Catalonia, Spain; Tel: 935812964
