BMC Neuroscience, 01 July 2011, Vol.12(Suppl 1), p.P131
Recent years have seen a rapid increase of interest in simultaneously analyzing the activity recorded from large numbers of channels in order to investigate the role of concerted neural activity in brain function. These efforts have led to advances in data analysis methods that exploit the parallel properties of such data sets. However, an often neglected aspect is that massively parallel data streams place new demands on handling their complexity during all stages of a project: from the initial recording, throughout the analysis process, to the final publication. Three factors contribute to these new demands. First, the sheer quantity of data complicates the organization of data sources, and the resulting automation of analysis steps renders the validation of interim and final results difficult. Second, modern analysis methods often require intricate, multi-layered implementations, leading to sophisticated analysis toolchains. Third, a growing number of projects need to be carried out in teams, within a laboratory or in collaborative efforts, requiring transparent workflows that guarantee smooth interaction. Taken together, this increase in complexity calls for a reevaluation of the traditional ad-hoc approaches to such projects. Can we derive general guiding principles that may be adopted in the design of efficient workflows? How could these improve our confidence in handling the data by providing better cross-validation of findings, reliably managing provenance data, and enabling tighter collaborative research, while at the same time leaving the scientist with the flexibility required for creative research?