Lab of the Future Congress: My Top 3 Takeaways
November 19, 2019
Last week, I had the pleasure of attending the Lab of the Future Congress at the Wellcome Center in Cambridge. It was great to connect with others who are really driving the future of drug discovery.
Back in the office but freshly energized from the meeting, I wanted to take a moment and share with you my main takeaways from last week’s discussions. We have been talking about these concepts for a while, but I see that tangible progress is being made on all of them.
1. Digitalisation is not simply moving to electronic means to store the same data (digitising)
To realise the lab of the future we need to consider what we can do differently because we have digital assets (this is the digitalisation step), including everything from how we run protocols to how we store results. We could simply move everything over to electronic storage (paper to glass), but what advantage would this alone bring? Rather, digitalisation creates a much greater opportunity: we now have the chance to capture more complex data and integrate far more information. With full integration to robotics, we might monitor minute-to-minute experimental conditions; with appropriate structuring and annotation of historical data, we can leverage past results. In the lab of the future, all of this might be integrated to better inform future decisions. We need to keep reminding ourselves to ask ‘what is the scientific question we want to answer?’ to make sure we stay on the digitalisation path.
2. Connectivity and interoperability are key to enabling the Lab of the Future, and enterprise software will play a crucial role
Format wars have raged forever. In order to get a good outcome, we need to innovate, but we also need to do the basics well. For the Lab of the Future (LOTF) to work and enable us to ask the more complex and innovative questions, we need all components—automated robotic systems, request handling, data reporting, analysis and storage workflows—to be connected and interoperable in the widest sense. We all strive to define standards, but at last week’s conference I was especially interested to see in more concrete terms how creating these ontologies using semantic approaches moves us a long way forward.
Another crucial component is software that is broadly compatible with the wide range of existing and emerging technologies used for drug discovery. Such software is a big but important investment for any research organization, so it must be stable and already have substantial, well-developed functionality. A good example of this is Genedata Screener®, which is compatible with a wide range of scientific instruments for many different technologies and has established a strong history of development.
3. People, processes and culture will also need to evolve to realise the Lab of the Future’s full potential
The Lab of the Future won’t replace people with robots and software, but it will require people, processes, and culture to evolve with the change. In the Lab of the Future, less time will be spent on routine experimentation, manual data compilation and analysis tasks. People will instead focus on process monitoring, intervening only in cases requiring more sophisticated judgement, and will interpret results with the aid of machine learning and AI methods.
Also, the value of a ‘negative’ result should not be overlooked: even though people often learn the most from unexpected results, they frequently fail to record them because of the time and effort required. By embracing the LOTF approach and capturing all results, we have the opportunity to examine and interpret a much richer set of data—provided, of course, that scientific culture evolves towards this type of thinking, involving more comprehensive collection and consideration of results. This will perhaps require some learning on our part, but in the long run it should allow us to better invest our energies and enhance our decision-making abilities.
Kevin Teburi is Managing Director of Genedata Ltd. (UK)