Google DeepMind has announced its second collaboration with the NHS: the company will use one million anonymised eye scans, along with anonymised information about eye conditions and disease management, to train a neural network to identify early signs of degenerative eye conditions. The collaboration with Moorfields Eye Hospital in east London aims to produce a machine learning system able to recognise sight-threatening conditions from a single digital scan of the eye.
“There’s so much at stake, particularly with diabetic retinopathy. If you have diabetes you’re 25 times more likely to go blind. If we can detect this, and get in there as early as possible, then 98% of the most severe visual loss might be prevented.”
The collaboration came about through an unsolicited approach from Pearse Keane, a consultant ophthalmologist at Moorfields. Keane reached out to the Google subsidiary via its website to discuss the need to better analyse scans of the eye. “I’d been reading about deep learning and the success that technology had had in image recognition,” Keane said, having come across the company’s first public success: training a machine to play Atari games.
“I had the brainwave that deep learning could be really good at looking at the images of the eye. Optical coherence tomography is my area, and we have the largest repository of OCT images in the world. Within a couple of days I got in touch with Mustafa, and he replied.”
DeepMind’s previous collaboration with the NHS attracted controversy: the company was accused of lacking the proper authority to share the records of patients involved in the trial. Since the Moorfields collaboration involves anonymised information, however, privacy issues should be less of a problem.
As noted above, the shared information amounts to about one million anonymised digital eye scans, along with anonymised information about eye conditions and disease management.
“This means it’s not possible to identify any individual patients from the scans. They’re also historic scans, meaning that while the results of our research may be used to improve future care, they won’t affect the care any patient receives today. The data used in this research is not personally identifiable. When research is working with such data, which is anonymous with no way for researchers to identify individual patients, explicit consent from patients for their data to be used in this way is not required.”
According to Prof Peng Tee Khaw, head of Moorfields’ ophthalmology research centre, the key to the collaboration was the huge increase in the volume of incredibly precise retinal scans available.
“These scans are incredibly detailed, more detailed than any other scan of the body we do: we can see at the cellular level. But the problem for us is handling this amount of data.”
“It takes me my whole life experience to follow one patient’s history. And yet patients rely on my experience to predict their future. If we could use machine assisted deep learning, we could be so much better at doing this, because then I could have the experience of 10,000 lifetimes.”
DeepMind’s earlier, ongoing collaboration with the Royal Free hospital in north London is focused on direct patient care, using a smartphone app called Streams to monitor patients’ kidney function. The Moorfields project, by contrast, is the company’s first venture into pure medical research, and the first time DeepMind has applied machine learning in a healthcare project.
Training a neural network to assess eye scans could vastly increase both the speed and accuracy of diagnosis, potentially saving the sight of thousands. DeepMind researchers will use the anonymised scan data to train an algorithm to better spot the early signs of eye conditions such as wet age-related macular degeneration and diabetic retinopathy.
Google DeepMind is a British artificial intelligence company founded in September 2010 as DeepMind Technologies and renamed after its acquisition by Google in 2014. The company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a Neural Turing Machine: a neural network that can access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain. The company made headlines in 2016 after its AlphaGo program beat a professional human Go player for the first time.
Blindness is the inability to see. The leading causes of chronic blindness include cataract, glaucoma, age-related macular degeneration, corneal opacities, diabetic retinopathy, trachoma, and eye conditions in children (e.g. caused by vitamin A deficiency).