30/11/2022
Automated YouTube transcript
All right, welcome to the Neuralink Show and Tell. We've got an amazing amount of new developments to share with you that I think are incredibly exciting, and we'll also tell you about the future of what we're planning to do here.
Now, this is meant to be a technical podcast, or sort of like a work…
I'm going to provide an overall summary, and then we're going to have a number of members of the Neuralink team come in and give a deep technical overview of the various areas. So let me move forward with the overall summary.
Now, some of the things I'm going to say are things that, if you've been following Neuralink, you've already heard before. But a lot of people out there have no idea what Neuralink does, so I'll be a little bit repetitive of things you may already know but others do not.
The overarching goal of Neuralink is to create, ultimately, a whole-brain interface: a generalized input-output device that in the long term could literally interface with every aspect of your brain, and in the short term can interface with any given section of your brain and solve a tremendous number of things that cause debilitating issues for people.

I'll talk a little bit about the long-term goal. It's going to sound a little esoteric, but it was actually my prime motivation: what do we do about AI? What do we do about artificial general intelligence? If we have digital superintelligence that's much smarter than any human, how do we mitigate that risk at a species level? And even in a benign scenario where the AI is very benevolent, how do we go along for the ride? How do we participate? The conclusion I came to is that the biggest limitation in going along for the ride, and in aligning AI, is bandwidth: how quickly you can interact with a computer.

We are all already cyborgs, in a way, in that your phone and your computer are extensions of yourself. I'm sure you've found that if you leave your phone behind, you end up tapping your pockets; it's like missing-limb syndrome. You're so used to interfacing with it, so used to being a de facto cyborg. But the limitation of a phone or a laptop is the rate at which you can receive and send information, especially the speed with which you can send it. If you're interacting with a phone, you're limited by the speed at which you can move your thumbs, or the speed at which you can talk into it. This is an extremely low data rate: maybe 10 bits per second, optimistically 100. A computer can communicate at gigabits or terabits per second. This is the fundamental limitation that I think we need to address to mitigate the long-term risk of artificial intelligence, and also just to go along for the ride. That's an esoteric explanation that I think will appeal to a niche audience, some of whom may be here. It's a very difficult problem, but even if we do not succeed with that problem, we are confident at this point that we will succeed at solving many brain-injury and spine-injury issues along the way.

By the way, we actually have Justin Roiland in the audience, so hi, Justin. A little Rick and Morty reference here: there's a great Rick and Morty episode about intelligence enhancement of your dog, and what's the worst that can happen? Anyway, I recommend Rick and Morty.

So, you want to be able to read the signals from the brain, you want to be able to write signals, you ultimately want to do that for the entire brain, and then you also want to extend that to communicating with the rest of your nervous system, if, say, you have a severed spinal cord or neck.

Now, this video is 18 months old. This is Pager, who is playing Monkey MindPong; Pager has a Neuralink implant in this video. The thing that's interesting is that you can't even see the neural implant. We've miniaturized the implant to the point where it matches the thickness of the piece of skull that is removed, so it's essentially like having an Apple Watch or a Fitbit replacing a piece of skull, a smartwatch for lack of a better analogy. You really can't tell; he looks pretty normal. And I think that's pretty important: I could have a Neuralink device implanted right now and you wouldn't even know. Hypothetically. Maybe in one of these demos. In fact, in one of these demos I will.

First of all, it's kind of wild that monkeys can play Pong; they actually can, if you give them a joystick. So Pager first learned to play Pong with a joystick, which was novel to me, because I didn't know monkeys could play Pong. Then we took the joystick away and used the Neuralink, and now he's playing a telepathic video game, essentially.

What we've been doing since then is going down the very difficult journey from prototype to product. I've often said that prototypes are easy and production is hard. It's a hundred to a thousand times harder to go from a prototype to a device that is safe, reliable, works under a wide range of circumstances, is affordable, and is done at scale. It's insanely difficult. There's an old saying that it's one percent inspiration and 99 percent perspiration, but I think it might be 99.9 percent perspiration. The best example I can give of an idea being easy but the execution being hard is going to the Moon: the idea of going to the Moon, easy; going to the Moon, very hard.

We've been working hard to be ready for our first human, and obviously we want to be extremely careful and certain that it will work well before putting a device in a human. We've submitted, I think, most of our paperwork to the FDA, and we think that in probably about six months we should be able to have our first Neuralink in a human. [Applause]

As I said, we do everything we possibly can to test the devices, not just before going into a human, but before even going into an animal. We do benchtop testing, we do accelerated-life testing, and we have a fake brain simulator that has the texture of a brain, emulating a brain, but it's sort of rubber. Before we would even think of putting a device in an animal, we do everything we possibly can with rigorous benchtop testing. We're not cavalier about putting devices into animals; we're extremely careful, and we always want the implant, whether it's in a sheep or a pig or a monkey, to be confirmatory, not exploratory. Only once we've done everything we possibly can with benchtop testing would we consider putting a device in an animal. We'll actually show you a demo later today, over a few hours really, of implanting in a brain proxy, and if anyone in the audience wants to volunteer, the robot is right there.

Since the Pager demo we've expanded to work with a troop of six monkeys, and we've actually upgraded Pager. They do varied tasks, and we do everything possible to ensure that things are stable and replicable and that the device lasts for a long time without degradation. What you're seeing there looks like The Matrix, but that's actually a real output of neural signals. That's not a simulation or a screensaver or something; those are actual neurons firing, and that is what one of the readouts looks like.

Here you can see Sake, one of our other monkeys, typing on a keyboard. This is telepathic typing. To be clear, he's not actually using a keyboard; he's moving the cursor with his mind to the highlighted key. Now, technically, he can't actually spell, and I don't want to oversell this thing, because that's the next version. But what's really cool here is that Sake the monkey is moving the mouse cursor using just his mind, moving the cursor to the highlighted key and spelling out the prompted message.

This is something that could be used by somebody who's, say, a quadriplegic or tetraplegic, even before we make the spinal-cord stuff work: being able to control a mouse cursor and control a phone. And we're confident that someone who has basically no other interface to the outside world would be able to control their phone better than someone who has working hands.

Upgradability is very important, because our first production device will be much like an iPhone 1, and I'm pretty sure you would not want an iPhone 1 stuck in your head if the iPhone 14 is available. So we have to be able to demonstrate full reversibility and upgradability: you can remove the device and replace it with the latest version, or, if it stopped working for any reason, replace it. That's a fundamental requirement for the Neuralink device. And I should say both Sake and Pager were upgraded to our latest and greatest implants, so it's been over a year and a half now that Pager has had first the original implant and then the upgraded implant. This is a very good sign that it lasts for a long time with no observed ill effects.

I think it's also important to show that Sake actually likes doing the demos; he's not strapped to the chair or anything. The monkeys actually enjoy doing the demos, because they get the banana smoothie and it's kind of a fun game. The summary is that we care a great deal about animal welfare, and I'm pretty sure our monkeys are pretty happy. As you can see, he's a quick decision-maker on the fruit front.

The first two applications we're going to aim for in humans are restoring vision and restoring movement. I think it's notable that even if someone has never had vision, ever, like they were born blind, we believe we can still restore vision, because the visual part of the cortex is still there. Even if they've never seen before, we're confident they could see. The other application is in the motor cortex, where we would initially enable someone who has almost no ability to operate their muscles, sort of a Stephen Hawking type situation, to operate their phone faster than someone who has working hands. Then, even better than that, would be to bridge the connection: take the signals from the motor cortex and, say somebody's got a broken neck, bridge those signals to Neuralink devices located in the spinal cord. We're confident there are no physical limitations to enabling full-body functionality. As miraculous as it may sound, we're confident that it is possible to restore full-body functionality to someone who has a severed spinal cord. [Applause]

And I want to emphasize again that the primary purpose of this update is recruiting. A lot of times people think they couldn't really work at Neuralink because they don't know anything about biology or how brains work, and the thing we really want to emphasize here is that you don't need to. When you break down the skills that are needed to make Neuralink work, it's actually many of the same skills that are required to make a smartwatch or a modern phone work: software, batteries, radios, inductive charging, as well as things that are specific to us, like animal care and clinical and regulatory matters. And obviously machine learning. That phrase is used a lot, but we do need to interpret the signals from the brain, which is a biological neural net, and the best thing to interpret a biological neural net is a digital neural net. If there's one message I want to convey, it is that if you have expertise in creating advanced devices like watches, phones, and computers, then your capabilities would be of great use in solving these important problems. That's the one message I want to convey.

So, with that: DJ. DJ was on the founding team of Neuralink and has made immense contributions to the company, as have many of the others who will present, but I want to thank DJ for his immense contribution to Neuralink.

All right, cool, thank you. Thanks, Elon. When I moved from South Korea at age 13 and needed to learn a new language to communicate, I wondered whether there were better and more effective means of communicating my thoughts to the outside world. And watching Neo learn kung fu in The Matrix, I remember thinking: wow, I want to work on making that possible. Today I believe that this is a tractable engineering challenge, since everything about your intentions, your thoughts, and your experiences is all in your brain, encoded as binary statistics of action potentials. If you're able to put electrodes in the right places, with the right sensing and stimulation capabilities, this and many of the other applications that Elon talked about are possible, and we can help a lot of people. I'm incredibly excited to be working on this ambitious yet important mission to make that future a reality here at Neuralink, and I'm also incredibly honored to be working with brilliant colleagues, scientists, and engineers across many engineering disciplines at this intersection of biology and technology. You'll hear from several of them today about the breadth of technical challenges we face and our progress in the last year, and I think you'll find that for most of these challenges, as Elon mentioned, you don't need a prior understanding of how the brain works, and that a lot of what we do is applying engineering first principles to biology.

So how do you create a high-bandwidth, generalized interface to the brain? From day one we focused on a set of foundational technologies that are safe, scalable, and capable of accessing all areas of the brain. These three axes, safety, scalability, and access to brain regions, really form the basis for how we engineer products here at Neuralink.
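[Editor's aside: as a concrete, heavily simplified illustration of the "binary statistics of action potentials" idea mentioned above, spike events are commonly binned per channel into count vectors, which is the kind of feature matrix that downstream decoders consume. This is a generic sketch, not Neuralink's actual pipeline; the function name, bin size, and data layout are all assumptions.]

```python
import numpy as np

def bin_spikes(spike_times_per_channel, duration_s, bin_ms=25):
    """Convert per-channel spike timestamps (in seconds) into an
    [n_bins x n_channels] matrix of binned spike counts, the kind of
    feature matrix a velocity decoder would typically consume."""
    n_bins = int(duration_s * 1000 / bin_ms)
    counts = np.zeros((n_bins, len(spike_times_per_channel)))
    for ch, times in enumerate(spike_times_per_channel):
        # Map each spike time to its bin index, drop out-of-range spikes
        idx = (np.asarray(times) * 1000 // bin_ms).astype(int)
        idx = idx[(idx >= 0) & (idx < n_bins)]
        # Unbuffered in-place add handles repeated bin indices correctly
        np.add.at(counts[:, ch], idx, 1)
    return counts

# Two channels, 1 second of activity, 25 ms bins -> a 40 x 2 matrix
spikes = [[0.010, 0.012, 0.900], [0.500]]
rates = bin_spikes(spikes, duration_s=1.0)
```

A decoder would then map each row (one time bin across all channels) to an intended output such as cursor velocity, which is the kind of decoding described later in the presentation.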
Safety, because we want to make our devices, as well as the installation, as safe as possible, so that we can drive the adoption of this technology. Scalability, because as we make our devices safer and more useful, more people will want them, and with scale we also want to make them more affordable. And access to brain regions, so that we can expand the functionality of our technologies.

Our first step along these dimensions is what we call the N1 implant. It's about the size of a quarter, and it has over 1,000 channels capable of recording and stimulating. It's microfabricated on flexible thin-film arrays that we call threads. It's fully implantable and wireless, so no wires: after the surgery the implant is under the skin and invisible. It also has a battery that you can charge wirelessly, and you can use it at home.

Similarly, for implanting our device safely into the brain, we built a surgical robot that we call the R1 robot. It's capable of maneuvering these tiny threads, which are only on the order of a few red blood cells wide, and inserting them reliably into a moving brain while avoiding vasculature. It's quite good at doing this reliably, and because we've never shown an end-to-end insertion with the robot in action, we're going to do a live demo of the robot doing surgery on our brain proxy. So, who wants to see some insertions?

Here it is: that's our R1 robot with our patient, Alpha, who is lying comfortably on the patient bed. This is what we call the targeting view. What you're seeing is a picture of our brain proxy; the pink represents the cortical surface that we want to insert our electrodes into, and the black represents the vasculature that we want to avoid. The hash marks with numbers represent where we intend to put each of our threads. So, should we see some insertions? Here's another view, real quick: on the left is the view of the insertion area, and on the right you can see what the robot's going to do. It's going to peel the threads one by one from a silicon backing and insert them into the targets that we predetermined in the targeting view. There you go, that's the first insertion. [Applause] We're going to see a couple more insertions; the whole process of inserting the roughly 64 threads in our first product is going to take around 15 minutes for this robot. There's a second one that went in, and we're going to do a third one. There you go. That's going to keep going in the background, and we'll come back to it in a later part of the presentation. [Music] [Applause]

As Elon mentioned, we've been working very hard to go from prototype to building product. As part of this, one of the things we did was move our device manufacturing to a dedicated facility in Austin for scaled-up manufacturing. What's important to highlight, and is evident in this clip, is that it's very typical for the engineers who design the device to also work on the physical manufacturing line to build and debug, and this has been extremely critical in reducing our iteration cycle time. We've also scaled up our surgery: we now have our own dedicated OR, in fact a double OR, in Austin, and this is just a stepping stone before we eventually build our own Neuralink clinic.

With this product, N1 and R1, our initial goal is to help people with paralysis from complete spinal cord injury regain their digital freedom, by enabling them to use their devices as well as, if not better than, they could before the injury. As Elon mentioned, over the last year this has been the central focus of the company, and we've been working very closely with the FDA to get approval and to launch our first in-human clinical trial in the U.S., hopefully in the next six months. So hopefully this gives you a good overview of our product. For the next hour we're going to go through deep technical dives on these topics, to tell you about our technical challenges, share some of our progress, and preview what's coming next. With that, over to Nir from my team, who's going to talk to you about neural decoding.

Thank you. Hi everyone, my name is Nir, and I work on brain interface applications. Our goal is to enable someone with paralysis to control a computer as well as I can, or even better. We'd like to provide fast and accurate control, with all the functionality of computers, that works anytime, anywhere. I'm very excited to show you how we are using the N1 device, with our software and algorithms, to achieve this.

Last year we shared with you a video of Pager the monkey controlling a computer cursor with his brain. How do we do that? Just a brief reminder: first, we record the neural activity from the motor cortex using the N1 device; we can record from over a thousand channels while he's playing with the joystick. Then we train a neural net that predicts the cursor velocity from the patterns of his neural activity. With this decoder, he can then control a cursor just by thinking about it, without even moving the joystick. He can play a variety of games with this decoder, including a grid task, where he moves the white dot towards the yellow targets. Every time he gets one, he receives a drop of his favorite smoothie, and he chooses to play this game every day. Here you can see his performance from early 2021, around the time we released the previous demo. It's quite accurate, but it's a bit slower than what we would like, and cursor control is the foundation for interacting with most computer applications. So since then we've been working to improve cursor speed and accuracy. As you can see, it's much, much faster, almost twice as fast. [Music] However, it's still a bit slower than what I can do, so we are working on creative ways to improve that.

Now, speed is not enough; you want the full set of functionality. For decades, most software has been built for mouse and keyboard control, and it doesn't make sense to reinvent this entire ecosystem for brain control, at least for now. So we are designing mouse and keyboard interfaces for the brain. The way we do that is by training Pager and his friends on a variety of computer tasks and then designing algorithms to predict the behavior. Here you can see a few examples of tasks in different phases of monkey training: left and right click, click and drag, cursor typing, swipe typing, handwriting, and even hand gestures.

Now, interacting with a computer is bi-directional, and feedback is very important. I like that when I click on a button, I can physically feel the button being pressed. When a potential N1 user attempts to click, they won't be able to feel it. One example of how we're addressing that is by providing real-time visual feedback that represents the strength of the neural click, by changing the color of the cursor. Just as typing on a physical keyboard is much faster and easier than typing on an iPad keyboard, this kind of feedback will make brain control much faster and easier to use.

Typing is one of the most important functionalities. You already saw this message, and I want to show you the behind-the-scenes of how it was created. Here you can see again Sake using the virtual keyboard, typing this message. This virtual keyboard is similar to the one I use on my phone, and with the speed and accuracy that we've achieved so far, typing on a virtual keyboard is already fast and easy. However, I never use a virtual keyboard when I type on my computer, because it covers my screen and it's also much slower than what I can do with my ten fingers. We can do better. For example, a group from Stanford asked a person to imagine handwriting letters, and then decoded the letters from his neural activity. Using this approach, they were able to speed up typing rates. We started this project with our monkeys, but of course they don't know how to write, so to mimic writing we trained Angela, one of our favorite monkeys, to trace digits on an iPad. Here you can see him tracing the digit 5 and the digit 2. Then we recorded his neural activity with the N1 device, but now, instead of decoding the cursor velocity, we decode in real time the digit that he's tracing on the screen. We had two main takeaways from this project. One: monkeys are awesome and can learn very complex tasks. Two: although this can increase the typing rate, it requires hundreds of examples of each of the digits and characters we want to classify, and that would not scale. The way we are solving that is by indirection: instead of decoding the digits directly, we first decode the hand trajectory on the screen, and once we've decoded the hand trajectory, we can use any off-the-shelf handwriting classifier to predict the digits and characters, for example, classifiers trained on the MNIST dataset. Why is this so important? Because now we can potentially decode any character in any language with only one neural decoder, for hand trajectory. It means that you can write in English, Hebrew, Mandarin, or even monkey language, and we can understand that you wanted a banana. There are many challenges ahead of us to improve functionality and speed, and I want to hand it off to Bliss to talk about the third part: how we are making our brain interfaces work anytime, anywhere.

Hello everyone, my name is Bliss, and I'm a software engineer here at Neuralink. When I use my computer, my mouse and keyboard work as I intend them to at least, like, 99.9999% of the time. My goal is to enable a user with paralysis to control their computer as reliably as I can. Here's what we want that experience to feel like: in this video you can see Sake walking over to his MacBook and choosing to work on his typing task. The entire decoding system works out of the box, and it feels totally plug and play.

The first step to achieving this kind of high reliability is to test extensively offline. A typical flow for using the N1 Link is to connect over Bluetooth, stream out neural activity from the brain, and then use that neural activity to train decoders and do real-time inference. We've built a simulation of exactly this sequence, but instead of using a monkey with an implant, we use a simulated brain that injects synthetic neural activity into an implant sitting in a server rack. From the point of view of that implant, it's in a real brain. This simulation runs on every code commit, to validate that, from the hardware all the way up through the neural decoders, our entire stack can achieve state-of-the-art performance.

However, while this kind of simulation is great for integration testing of software and hardware, it's not yet detailed enough to guarantee high reliability in the real world. In the real world, the underlying signals we're trying to decode actually change day to day. In this plot you can see the average firing rate detected on a representative channel of Sake's implant; each bar represents one day, and you can see that each day has a different average firing rate than the previous one. This presents us with a very interesting problem: how to make our decoders robust day to day. It can actually happen that if you train a neural decoder on one day of data and then try to use it the next day, the average firing rates can shift enough to cause a bias in the output of the model. Here on the right you can see that this bias is making it hard for the cursor to move to the upper-right corner; you see it struggling to make it up to the upper right, and then it moves much more effortlessly down to the bottom left. We're trying many approaches to mitigate this problem. Some examples include building models on large datasets spanning many days, to try to find patterns of neural activity that are stable across days. Another approach we're trying is to continuously sample statistics of the neural activity on the implant and use the latest estimates to pre-process the data before feeding it into the model. This is really an active area of research for the team, and it's a critical problem to solve if we want to enable someone with paralysis to control their computer as well as I can.

Another big problem is to minimize the time it takes for a spike in the brain to impact the movement of the cursor on the screen. If you have lag or jitter in this control loop, the cursor becomes hard to control, leading to the kinds of overshoots you can see here on the right. One big improvement we've made in this direction is called phase lock. Phase lock aligns the edge of each packet that we send off the implant to the exact moment the Bluetooth radio is going to wake up. This minimizes the time it takes for a spike in the brain to be incorporated into the prediction of our neural network. Here you can see the latency distribution after phase lock: not only has the mean been greatly reduced, but the variance has been reduced as well. This makes it easier for the user to predict the behavior of their cursor.

Over the last year we've made tremendous improvements to the stability and reliability of our system, and we've been able to demonstrate consistent high performance across many sessions and many months. However, there's still a long road ahead of us before the system will truly feel plug and play. So if solving the hard problems required to ship this technology is exciting to you, you should consider applying to join the team. Now I'm going to hand it over to Avinash to talk about how our custom low-power ASIC detects spikes in the brain. [Applause]

Hi, I'm Avinash, one of the engineers on the ASIC team. We design the custom neural sensors, which include both analog and digital circuitry, to record and stimulate across 1,024 independent channels. We face challenges across all three major metrics: performance, power, and area. Not only do we have to fit all 1,024 channels into a single quarter-sized implant, but we also have to measure spiking activity less than 20 microvolts in amplitude. Today I'd like to focus on the last challenge I mentioned: power consumption. It's important to us because we want to give future users a full day of use of their implant without any interruption for charging. Back in 2018 we were sending every sample from every channel off the device for processing, which burned a ton of power. In 2020 we brought spike detection onto the chip. As you may know, neurons transmit information by firing, so simply monitoring for these spikes and only sending spike events off the implant acts as a very efficient form of compression. Over the past two years we've continued to make optimizations within the ASIC, dropping the total system power consumption down to just 32 milliwatts and doubling battery life.

Let's take a look at our on-chip spike-detection algorithm, which makes our battery-powered implants possible. We first apply a 500 Hz to 5 kHz bandpass filter to remove noise that's out of band. Next, we use an estimate of the noise floor to generate an adaptive threshold per channel. And finally, our spike-detector module identifies three key points of a spike. Identifying three points allows us to detect not just the presence of a spike, but the shape of the spike as well, and this can be extremely important for distinguishing between multiple neurons adjacent to a single channel.
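[Editor's aside: the pipeline just described (bandpass filter, adaptive threshold derived from a noise-floor estimate, three key points per spike) can be sketched offline as follows. This is an illustrative software analogue, not the ASIC implementation; the sample rate, threshold multiplier, and MAD-based noise estimate are assumptions.]

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 20_000  # assumed sample rate in Hz, illustrative only

def bandpass(x, lo=500, hi=5_000, fs=FS):
    """500 Hz - 5 kHz bandpass, as described in the talk."""
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def detect_spikes(x, k=4.5):
    """Adaptive threshold from a robust noise-floor estimate (MAD),
    then report three points per spike: threshold crossing, the
    extremum (trough), and the return back above threshold."""
    noise = np.median(np.abs(x)) / 0.6745  # robust noise std estimate
    thresh = -k * noise                    # negative-going spikes
    below = x < thresh
    # Indices where the signal enters / leaves the below-threshold region
    edges = np.flatnonzero(np.diff(below.astype(int)))
    spikes = []
    for start, end in zip(edges[::2], edges[1::2]):
        trough = start + 1 + np.argmin(x[start + 1:end + 1])
        spikes.append((start + 1, trough, end + 1))  # three key points
    return spikes

# Synthetic trace: unit-variance noise plus one injected spike near sample 1000
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, FS)
trace[998:1003] -= np.array([4.0, 12.0, 20.0, 12.0, 4.0])
spikes = detect_spikes(bandpass(trace))
```

Capturing the trough alongside the two crossings is what lets a detector preserve rough spike shape, which, as noted above, helps distinguish multiple neurons near one channel.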
today I'd like to focus on one of the many optimizations that we've made in our latest chip this one specifically cutting system Power by 15 percent note that neurons Spike relatively infrequently which means that our Spike detector spends a lot of time searching for the first point of a spike and very little time searching for the other two points of a spike that only occur after the threshold is crossed we can use this characteristic of the input waveform to reduce memory accesses within the chip by 30 percent let's take a look at how that works our Spike detector is implemented as a single functional unit that's shared across all channels with an SRAM to buffer the state of each Channel as a sample comes in its Channel state is read from SRAM an incremental Spike detection step is run and then the updated state is written back to SRAM since this is happening 20 million times per second across the implant each of these accesses add up quite quickly in our latest chip we split the state into two parts a hot State and a cold state the hot state is accessed on every cycle while the cold state is only accessed once the threshold is crossed reducing the average axis width and saving power we're also working on a Next Generation stimulation focused chip with 4096 channels still within the footprint of our current chips in addition to increasing the channel count we're also increasing the drive voltage so we can get better activation per Channel and to support this higher Channel count as well as a broad range of future applications that you'll soon hear about we're adding an arm core onto the chip and finally since these chips are the same size as our current chips we can still put four of them together into a single implant for a total of 16 000 channels still within the size of a quarter [Applause] very hard to improve the power consumption within the implant but we've also been working very hard to improve the charging experience of the implant which Matt will talk 
But first: the robot has just completed inserting all 64 threads, so let's take a look. [Applause] This is a view of the insertion site, similar to the one that DJ showed you earlier, but instead of the targeting reticles, if you look closely you can see that all 64 threads, each carrying 16 electrodes, have been inserted into the brain proxy while avoiding vasculature, and all just within the past 20 minutes. Let's hand it over to Matt now to continue the technical deep dive.

Hi, I'm Matt, head of brain interfaces electrical engineering. Our fully implantable N1 device depends on a battery for continuous operation. When that battery is running low, charging is accomplished through wireless power transfer. However, unlike many consumer electronic devices, which can simply offer a physical connector, charging a fully implantable device poses several unique challenges. First, the system must operate over a wide charging volume, without relying on magnets for perfect alignment. The system must be robust to disturbance and complete quickly, so as not to be overly burdensome. Most important, however, is safety: in contact with brain tissue, the outer surface of the implant must not rise more than two degrees C.

In pursuit of these goals, our charging system has gone through several engineering iterations. If you watched our pig demo in August of 2020, Gertrude was implanted with a version of the N1 charged with our first-generation charger. This device was implemented in a small puck package and later separated into a remote coil and battery base. This charger was challenging to use; however, we learned a lot through its implementation. Our current production charger, which charges our current generation of implants, is implemented in an aluminum battery base, which also includes the drive circuitry, and a remote coil four times the size of our original device, also disconnectable. This remote coil has increased switching frequency, driving improved coil coupling. This charger is in use today,
including several applications within our engineering and animal test facilities. I'd like to show you one of these applications here, with a device we call our simple charger. The coil has been embedded into the habitat and, with the addition of one new outer control loop plus a banana smoothie pump, the troop has been trained to charge themselves. So let's see how Pager charges his implant. On the right we're streaming real-time diagnostics from Pager's N1. When he climbs up and sits below the coil, you can see the charger automatically detects his presence and transitions from searching to charging. We see the regulated power output on a scale of zero to one, and the current driven into his battery.

I mentioned earlier that we improved the coil coupling. However, high-quality-factor coils exhibit good charging performance over relatively larger distances, but as they're brought closer to the implant, what you see is a peak-splitting effect, where the best, highest-efficiency power transfer is pushed up into higher frequencies, outside of the ISM band required for compliance with regulations on radiated emissions. In our next-generation charger we address this problem with the introduction of dynamic tuning, shown on the right. This allows us to adjust, in real time, the resonant frequency of the transmit and receive coils, so that we can change their properties just ahead of degraded performance. The electrical engineering team is currently engaged in developing a third-generation charger. Notable improvements include bidirectional near-field communication, which has allowed us to reduce control latency and improve thermal regulation; improved thermal regulation results in faster charge times. And now Julian will tell us about how we test the N1.

Thank you very much, Matt. My name is Julian and I lead the embedded software group on the brain interfaces team. When we started building implants, we had a small manufacturing line, and to collect data from an implant you would manually walk over with your laptop.
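The dynamic tuning Matt described hinges on the resonant frequency of an LC tank. As a hedged sketch (the component values and the 13.56 MHz ISM target below are illustrative assumptions, not Neuralink's actual parameters), retuning amounts to solving f0 = 1/(2π√(LC)) for the capacitance that keeps resonance on target as coupling shifts the effective inductance:

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an LC tank: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def retune_capacitance(L_effective, f_target):
    """Pick C so the tank resonates at f_target for the current
    effective inductance (which shifts with coil coupling)."""
    return 1.0 / (L_effective * (2.0 * math.pi * f_target) ** 2)
```

For example, with an assumed effective inductance of 2 µH, holding resonance at 13.56 MHz calls for a tuning capacitance of roughly 69 pF, and the controller would recompute this as the coupling changes.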
You would connect and collect the data of interest. But our goal is to make an ultra-safe and ultra-reliable implant, and so to do this we scaled up the manufacturing line, our testing throughput, and our data collection capabilities. Firstly, we added a large suite of acceptance tests to the manufacturing line; these test the functionality of each component and the final assembly. Implants coming off the line are then subjected to benchtop testing, accelerated lifetime testing, and animal models. We then collect data from these implants around the clock. This data is processed by a series of cloud workers and displayed in an aggregate manner, and then finally all of this information feeds back into our design process and empowers our engineers to answer any question about any implant at any time.

I'm now going to walk you through different parts of this infrastructure, starting off with firmware testing. The implant contains a small microprocessor running firmware to manage a whole bunch of its operations, and before we release a firmware update we want to rigorously test it with both unit and hardware-in-the-loop tests, also known as HIL tests. To do a HIL test, you instrument the battery, the power rails, and the microprocessor, then we connect to each device with a Bluetooth client, and then we walk the devices through various scenarios to test things like power consumption, real-time performance, security systems, fault recovery mechanisms, a lot of different things. In our original implementation of these systems we used off-the-shelf components to start automating tests quickly. However, these systems were constructed in a relatively ad hoc fashion and were very difficult to maintain, and this meant that testing quickly became the bottleneck for development. To alleviate this, the hardware and software teams developed a new system which integrates all the required components onto a single baseboard.
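The HIL pattern Julian describes, connecting over a Bluetooth client and walking a device through scenarios against a budget, can be sketched as a toy. The client class, scenario names, and power budgets here are all hypothetical, purely to show the shape of such a test:

```python
class FakeImplantClient:
    """Stand-in for a Bluetooth device client; this API is hypothetical."""
    SCENARIO_POWER_MW = {"idle": 5.0, "streaming": 25.0, "charging": 40.0}

    def connect(self):
        # A real client would open a Bluetooth session to the device.
        return True

    def run_scenario(self, name):
        # A real HIL rig drives the device and reads instrumented rails;
        # here we just report a canned power draw per scenario.
        return self.SCENARIO_POWER_MW[name]

def hil_power_test(client, scenario, limit_mw):
    """Walk the device through one scenario and check its power budget."""
    if not client.connect():
        raise RuntimeError("device unreachable")
    return client.run_scenario(scenario) <= limit_mw
```

A real suite would run many such scenarios per firmware build, failing the build when any measured quantity leaves its envelope.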
We can then put the charger and implant hardware on individual modules that plug into this baseboard, including one board with opposing coils so that we can test charging performance. This architecture allows us to rapidly iterate different hardware prototypes, because we can simply drop them into this system and reuse all the testing infrastructure. Additionally, we can host the current and next generation of our neural ASICs on FPGAs and plug those into this board as well, and that allows us to test a whole extra layer altogether. That's how we generated this rather inceptive image here on the right: what you're looking at is spiking activity emitted from some of our simulated neural sensors, streamed through the entire system over Bluetooth, and then displayed on a phone. This allows us to test everything in one system, from chip to cloud. This system is one-fifth the cost, one-fifth the volume, and is very easy to manufacture. This allows every developer to have a personal unit on their desk, and it also allows us to shard the entire test suite over a large number of these units mounted into a rack. All of this has greatly accelerated our rate of development.

Let's look next at how we monitor the implant's electronics, the battery, and the enclosure. The implant will periodically capture all of its vital signs and commit those to flash, and then upon next connection with one of our recording stations it will stream that data off. So, for instance, if we look at humidity, we can get an understanding of the integrity of the implant's enclosure, and by looking at battery voltage and power measurements we can gauge battery health. All of this is done automatically, without any intervention, giving us 24/7 visibility into the quality of every single device. Additionally, we can use this infrastructure to request high-fidelity information on demand so that we can investigate anomalous situations. For instance, in this particular scenario we were trying to track down the source of some spurious spikes that we were observing on different channels.
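The around-the-clock vitals monitoring just described can be pictured as a small flash-backed log with a simple anomaly rule. Everything here is illustrative (in particular the 1 %RH alert threshold is an assumption), but it shows how a humidity trend becomes an automatic enclosure-integrity flag:

```python
from collections import deque

HUMIDITY_ALERT_RH = 1.0  # assumed alert threshold: %RH rise over the window

class VitalsLog:
    """Ring buffer standing in for the implant's flash-backed vitals log."""
    def __init__(self, capacity=1024):
        self.samples = deque(maxlen=capacity)

    def capture(self, humidity_rh, battery_v):
        """Periodic vital-signs capture, as committed to flash."""
        self.samples.append((humidity_rh, battery_v))

    def humidity_rise(self):
        """Humidity drift from the oldest to the newest sample."""
        if len(self.samples) < 2:
            return 0.0
        return self.samples[-1][0] - self.samples[0][0]

    def enclosure_alert(self):
        """Flag an abnormal humidity rise, suggesting moisture ingress."""
        return self.humidity_rise() > HUMIDITY_ALERT_RH
```

On the real device this kind of summary would be streamed off at the next recording-station connection and aggregated by the cloud workers.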
So we requested raw waveform samples directly from those channels.

Capturing good-quality neural signals requires intact, low-impedance electrodes, and so this is also something we monitor very closely, with dedicated circuitry on the neural sensor. How do we do this? We do it by first using an onboard DAC to play a test tone on a single channel, and then we record, using our ADCs simultaneously, the response signal on both that channel and physically adjacent channels. Not only can we measure the impedance of every channel with this, but we can also map different physical phenomena to different characteristic signatures. For instance, an open channel will appear as a very large response on the channel itself, and shorted channels will appear as a large response on neighboring channels. By looking at the purity of the signal coming back, we can also validate that the analog front end of the neural sensor itself is operational. In our original implementation of these impedance scans, it took four hours to get through all 1,000 channels, but by parallelizing the tests, downsampling, filtering, and reducing the amount of information we have to stream off the device by moving a lot of the calculation to the firmware side, we're now able to scan all 1,000 channels in just 20 seconds. This means that we can run impedance scans on every implant every day, and then our internal dashboards can play back a history of this impedance, so that we can get a really good quantitative insight into that interface between biology and electronics. Now that you have an idea about how we test and monitor our implants, I'm going to hand it off to Josh, who's going to tell you about how we get feedback even faster by accelerating our implants to failure.

Thank you. Hello, my name is Joshua Hess and I'm an engineer on the brain interfaces team. We are responsible for the implant system design, as well as many of the manufacturing and testing tools.
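Julian's impedance scan boils down to playing a known tone and measuring the response amplitude on the driven channel and its neighbors. A rough sketch of that measurement (the frequencies, thresholds, and fault rules below are illustrative assumptions, not Neuralink's actual values):

```python
import math

def tone_amplitude(samples, freq, fs):
    """Estimate the amplitude of a test tone in a recorded signal via a
    single-bin DFT (correlation with sin and cos at `freq`)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n

def classify_channel(own_amp, neighbor_amp, nominal=1.0):
    """Map response signatures to channel faults, as described in the talk:
    a huge response on the driven channel suggests an open; a huge
    response on a neighbor suggests a short (illustrative thresholds)."""
    if own_amp > 3.0 * nominal:
        return "open"
    if neighbor_amp > 3.0 * nominal:
        return "short"
    return "ok"
```

Moving exactly this kind of per-channel reduction into firmware, rather than streaming raw samples off the device, is what makes a full-array scan fast.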
Julian just talked to you a little bit about some of the ways in which we test our implant electronics, hardware, and software, but what about the entire system as it relates to longevity in tissue? One of the ways we've addressed this is with the development of our in-house accelerated lifetime testing system. The system allows us to expedite and capture long-duration implant failure modes at scale, to rapidly increase our pace of iteration. Even better, the system also significantly reduces the number of tests which require animal models, both for implant prototypes and, of course, longevity testing. So how does the system work? On a very basic level, it comes down to three things. First, we want to mimic the internal chemistry of tissue. Next, we want to accelerate these chemical interactions, as well as diffusion, with our implant materials. And finally, we want to aggressively cycle the internal electronics of our implant. With these things, primarily the first two, we have achieved a conservative 4x
acceleration factor via the Arrhenius relationship. In other words, every day our implants spend in our accelerated system is equivalent to at least four days spent in vivo. Historically, one of our greatest challenges has been the battle against moisture ingress into our implants, so we continuously monitor the internal humidity to watch for abnormal rise. Here in white you can see some internal humidity data from implants in some of our animals, for a duration of over one year. As you can see, our internal humidity sensing is so sensitive it can even detect the very small and slow humidity rise just from diffusion through our implant materials. Now, in blue, you can see that same internal humidity data, but from devices in our accelerated system. If we adjust this data for our acceleration factor, you can begin to see not only the agreement in this data, but also just how far into the future the data extends. Now, in red, you can see a device which has failed in our accelerated system. This device showed an abnormal increase in humidity over a duration of many months before implant electronic failures occurred.

So how did we build the system? Well, we started building the first system prototype just after the COVID shutdown had begun in early 2020.
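The 4x figure above comes from the Arrhenius relationship. A sketch of the acceleration-factor arithmetic, where the activation energy (0.7 eV) and the elevated bath temperature (57 °C) are illustrative assumptions rather than Neuralink's actual test parameters, chosen to show how a modest rise over 37 °C body temperature yields a factor of four or more:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_test_c, ea_ev):
    """Arrhenius acceleration factor between a use temperature and an
    elevated test temperature (Celsius inputs, kelvin internally):
    AF = exp((Ea / k) * (1 / T_use - 1 / T_test))."""
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))
```

Under these assumed parameters the factor works out to roughly 4.9, so quoting "a conservative 4x" corresponds to rounding such an estimate down.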
So we had to get a little creative. As you can see, our first system prototype was a little scrappy and operated out of one of our apartments, as indicated by the carpeting. Although scrappy, the system allowed us the fastest path to start testing our devices, tuning our working fluid chemistry, and checking our constraints. We also immediately started root-causing observed failures in early implant prototypes, fed that information into the next prototype designs, and literally rinsed and repeated. Over the duration of just a few months, the system was built out, totally custom and highly iterated, with two system versions and countless minor iterations leading us to our currently operated third-generation system, which achieves high-density testing with automatic in-vessel charging as well as automatic data collection. The system also features an implant sled assembly which accepts brain proxy material, such that the implant can be installed and inserted by the surgical robot, just like you saw a few minutes ago. We also integrated the system into a high-density rack-mount form factor, along with a centralized fluid management system, both for chemical uniformity across vessels and also reduced operational maintenance. The system has been in operation for the last year and a half and has had its fair share of challenges: since the system itself is undergoing the same accelerated abuse as the implants within it, it has been extremely challenging to design, build, and maintain a system of this scale while keeping it robust, even against itself.

So what comes next? Well, we've started work on our fourth-generation system and have totally redesigned it from the ground up to be a hot-swappable, single-implant-per-vessel design, partly inspired by high-density compute servers. With this new system we will achieve a whole new level of density, robustness, and scale. We also intend to have many of these systems operational, in pursuit of capturing even the lowest-frequency edge-case failure modes. With
this, we will have thousands of implants testing. In pursuit of these goals, we've already started work building out the system, but there is still a lot left to do. There are also many exciting challenges ahead of us, such as introducing mechanical stressing, brain proxy micro-motion, and even replicating tissue growth around the threads, for more complete and representative accelerated testing. Now that you've heard some of the ways in which we rigorously test our implant designs before production for surgery, Christine is now going to take you through a detailed look at our surgical process.

Thanks, Josh. Hi everyone, I'm Christine, leader of the surgery engineering team. To get an N1 device, it's essentially these steps: targeting and the incision, drill the craniectomy, remove the tough outer meningeal layer called the dura, then insert the thin flexible threads of electrodes, place the implant into the hole we created, and then that's it, you've got an implant under the skin. Look ma, no wires! Just kidding. I mean, seriously, no wires, but I don't actually have one. The surgical robot does the thread insertion part of the surgery. This is because it would be very difficult to do manually: imagine taking a hair from your head and trying to stick it into Jell-O covered by Saran Wrap, and doing this at a precise depth and position, and doing this 64 times within a reasonable amount of time. And a neurosurgeon would probably not like it very much if we asked them to do this for the surgery. So we have the robot that you saw doing its tiny dance. I sort of wanted to call it Tiny Dancer, but it's called R1, which is also great. The rest of the surgery is done by the neurosurgeon. In order for us to make an accessible and affordable procedure, we need to revisit this, and I'll tell you why. When I was in school, my dad lost the ability to walk, to use his arms, and even to speak; he was diagnosed with ALS. We would look on the internet, and you could see maybe one person here or there who had some cool custom
robotic assistive device, but it was deeply frustrating how limited the options available to him were. And there are hundreds of thousands of people with paresis, not even counting people with other conditions that our device might be able to help. Meanwhile, there are not that many neurosurgeons, maybe about 10 per million people, and it takes about a decade or more to train a neurosurgeon. They're already generally very busy, and as you can imagine, their time is very expensive. So in order for us to do the most good and have an affordable and accessible procedure, we need to figure out how one neurosurgeon could oversee many procedures at the same time. This might sound sort of crazy, but probably so did laser eye surgery before LASIK made it normal. LASIK's been around for about 30 years and counting. In the beginning, the laser robot did just the most fundamental, core part that it had to do, and the surgeon did the rest, and over the iterations the surgeon has to do less and less and the laser robot does most of it. And it's a highly compelling procedure: it takes just a handful of minutes and often gives life-changing results.

Since I joined in 2017, we've also done a handful of iterations to optimize the thread insertions of the robot. One of the challenges that we've had to face has to do with the opto-mechanical packaging. As you can see here, there are about three primary optical paths that are really valuable for safe, reliable thread insertions. One is the visible imaging of the needle inserting a thread. Another is the laser interferometry system called OCT, optical coherence tomography, which gives us the precise position of the brain while it's moving in real time. And then we also have to provide lighting and illumination to see what's going on in the visible-light camera. Doing all this where the needle is at the bottom of the craniectomy, especially when it's close to the skull wall, can make it pretty difficult to fit everything and be able to see it. So the way
that the team solved this is by putting all three of these optical paths into one optical stack, using photon magic, or polarization, whatever you want to call it. And that enables us to do vessel avoidance in real time. As I mentioned, the brain is moving, and where we place targets in the beginning may not be where you want to insert at the moment the needle is going down. So the robot can actually detect the vessels and then determine whether we're going to insert onto a vessel or not, whether it's safe to insert, and that way we can avoid inserting onto major vessels. And that brings us to the robot that we have here today.

There's still a lot for us to do to get to that procedure where we reduce the role of the neurosurgeon and make it affordable and accessible. The two elements of the surgery that demand the most skill from the neurosurgeon are the craniectomy and the durotomy. Alex and Sam are going to tell you a bit more about how we think we can get rid of the durotomy step, so that leaves the craniectomy. In neurosurgery, if your craniectomy is small enough, you can use a standard tool called a perforator, which makes quick work of this job, but for a larger craniectomy the surgeon has to rely on their skill in order to accommodate the variability from patient to patient in skull thickness and skull hardness; even within the same patient, in the same craniectomy, you can have different skull thicknesses, for example. In addition, if we can make something that does a very high-precision craniectomy, we can open the design space for future ways of mounting the implant to the skull. So I'll show you a few of our prototypes. Ultrasonic cutters, like what's on the screen, and oscillating cutters have the benefit of not cutting soft tissue: you can cut the bone and not the brain. However, as you can see here, our ultrasonic cutter prototype created quite a bit of heat to cut at the rate that we wanted. So, on to the oscillating saw. Here we designed a blade to minimize cut time and also
conducted sound and heating, and as you can see, you can cut through hard things like bone but not soft things like skin. It's simple and it works. However, if you want to cut an arbitrary depth or an arbitrary shape, the oscillating saw just won't cut it. I was afraid no one would get that; you guys are smart. So there's a time-tested solution for drilling arbitrary shapes, which is a CNC drill. The challenge with us doing this on a person is that we need to make sure it cuts reliably every single time and doesn't cut too deep, and a few of the ways that we're using feedback to make sure we don't cut through the brain are force feedback and also impedance sensing. And if I could get a volunteer... just kidding, maybe next time. But yeah, this is some insight into some of the things we're working on to make an accessible and affordable procedure. And now Alex is going to tell you a bit about our next-generation developments. [Applause]

Thanks, Christine. I'm Alex, I'm a mechanical engineer here on the robotics team. Now that we've covered the technology and surgical process for our current device, we'd like to cover some of our next-generation development projects. I and the next couple of speakers would like to talk about one of those projects, which is enabling device upgradeability. You've gotten to hear about the advancements we've made over the past year: we've improved implant robustness, battery and charging performance, and Bluetooth usability. Realistically, every new device version is going to be significantly better; it'll be more functional, it'll last longer. We need to keep this new technology accessible for our early adopters. This means that we need a solution to make device upgrade or replacement just as easy as it is to initially install. As many medical device companies have found, this is a challenging problem; the body's healing response doesn't make it easy. So this isn't solved yet, but we've made significant progress towards enabling it that we'd like to cover today.
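Christine mentioned force feedback and impedance sensing as the safeguards for the CNC craniectomy. Purely as a hypothetical illustration of such an interlock (the thresholds and the stop rule are invented for this sketch, not Neuralink's control law), a feed-halt condition might combine a drop in cutting force with a large impedance shift against an in-bone baseline:

```python
def should_stop(force_n, impedance_ohm, baseline_force_n=2.0, baseline_z_ohm=1000.0):
    """Hypothetical feed-halt rule for a depth-limited bone cut:
    a sudden drop in cutting force, or a large change in measured
    impedance versus the in-bone baseline, suggests breakthrough."""
    force_breakthrough = force_n < 0.5 * baseline_force_n
    impedance_shift = abs(impedance_ohm - baseline_z_ohm) > 0.5 * baseline_z_ohm
    return force_breakthrough or impedance_shift
```

A real controller would of course fuse many more signals and fail safe, but the idea is the same: stop the feed the moment the material under the bit stops looking like bone.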
Now, we'll have to start with some background as to what makes device upgrade challenging, and we'll start with the anatomy. Under the skin you have the skull; below that, the dura, a tough membrane that separates the bone from the brain; and between the dura and the brain you have the pia-arachnoid complex, a fluid-filled suspension for the brain. To install the device, the surgeon removes a disc of skull and dura to expose the brain surface, and the device then replaces the removed material. The challenge is here at this interface: over months, all empty volume is filled by tissue encapsulating the device and the threads. The device itself would be trivially easy to remove; because of the threads' small size, they would slip right out of the brain. It's the tissue layer that forms above the surface that makes removal challenging. We built tools in-house to study this response and characterize it, such as histology and micro-CT. In these images you can see that layer of tissue that has formed above the surface, encapsulating the threads and adhering to the surrounding tissue.

We've explored many different avenues for designing around the healing process and finding a solution to make device upgrade seamless. Our best successes have come from making the procedure less invasive: instead of directly exposing the brain's surface, we keep the dura in place, maintaining the body's natural protective barrier. This prevents encapsulation of the brain's surface, and really, this is actually a huge win for making the surgery simpler and safer, as Christine alluded to. However, this doesn't come for free. The dura is a very tough, opaque membrane; as you can see in these SEM images, it's composed of a dense network of collagen fibers. These offer an array of technical challenges for inserting our electrodes. One of those challenges is imaging through the dura. As you can see on the left, our current custom optical systems offer pretty incredible capabilities for imaging the exposed brain surface. However, as you can see on the
right, once the dura is in place you can't see the dense vasculature at the brain surface; the dura is in the way, and there's simply too much attenuation. To solve this problem, we're developing a new optical system that uses a medical-standard fluorescent dye to image vessels underneath the tissue. Here you can see that dye perfusing through the vessels, highlighting them. There's still a lot of engineering work to go to prove the accuracy and repeatability of this system, but once that's done, this will allow us to target and avoid blood vessels underneath the dura. We're also exploring applying our laser imaging system to deeper tissue structures. In the bottom left you can see a section of the tissue layers underneath the dura; this image is compiled from multiple volumes from our optical coherence tomography system, and you can see the collage of those volumes above. In the future, these new systems, when combined with correlation to pre-op imaging such as MRI, will enable precise targeting without directly exposing the brain surface. Now, imaging isn't the only challenge that comes with the tough dural anatomy, so I'd like to hand it over to Sam to talk about some of the challenges of inserting our electrodes through this membrane.

Thanks, Alex. Hey, I'm Sam, and I lead the needle manufacturing and design team. As Alex mentioned, the same properties of the dura that make it a good protector of the brain also make it really difficult for us to insert the threads through. In humans, the dura can be over a millimeter in thickness, which doesn't sound like a lot, but compared to our 40-micron needles it actually is a lot. For example, if you scaled up the needles to the size of a pencil, the dura would scale to over four inches in thickness. Take a look at how far you have to zoom in to even see it: by the time the features of the needle come into frame, you can see individual red blood cells in the same frame. This is a real-life SEM image of our latest design. On the left there
you can see the end of the thread, in the middle is the needle, and on the right is actually a piece of my hair. So yeah, it's extremely small. And besides being really small, there are a lot of other challenges associated with designing this. One challenge is that we have to use the needle, and the protective cannula that it sits in, to grab onto the thread and hold it while we peel it from its protective silicone backing, and then we have to keep holding it while we bring it over to the surface and then release it from the cannula during insertion. Another challenge is that the brain is really soft beneath the tough dura, so if the needle isn't sharp enough it'll just keep dimpling the surface without puncturing, and if this free length gets too long it can actually just buckle the needle, like this. And we don't just have to get the needle through, we have to get the thread through as well, so we really have to focus on optimizing the combined profile of the needle and thread together. These are just some of the challenges associated with designing something like this, and we've found that the key to solving these problems has been improving our speed of iteration.

But let's look at how we make these things in the first place. We start with a length of 40-micron wire made out of tungsten, alloyed with a little bit of rhenium for added ductility. We designed this femtosecond laser mill in-house to cut the features of the needle and cannula, and it can do this with sub-micron precision. We spent a lot of time this year turning this thing from a science project into an industrial system. Just a couple of months ago, it took a skilled operator 22 minutes to make a needle, and even a skilled operator could only get about 58 percent yield. Today, that same process takes just six minutes, and anyone can get 91 percent yield with just a few minutes of training. With only one click, the mill cuts and measures the needle and cannula and uploads the measurements to our LIMS, so that the robot can use the exact dimensions for each needle that it uses.
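Sam's numbers imply a large gain in usable-needle throughput, and the arithmetic is quick to check (the figures below just restate the ones from the talk):

```python
def good_needles_per_hour(minutes_per_needle, yield_fraction):
    """Usable needles per hour: attempts per hour times yield."""
    return (60.0 / minutes_per_needle) * yield_fraction

before = good_needles_per_hour(22, 0.58)  # skilled operator, old process
after = good_needles_per_hour(6, 0.91)    # few minutes of training, today
improvement = after / before
```

That works out to roughly 1.6 usable needles per hour before versus about 9.1 after, so the combined speed and yield improvements multiply to roughly a 5.8x gain in good parts per hour.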
Now, this is all for our current design, and we've had a couple of years to optimize its manufacturing process. The current design has served us well so far, but it doesn't quite protect the thread well enough to get through the tough dura, so, like I said, we had to come up with something new, and we needed to be able to iterate on designs quickly. Unsurprisingly, there's no page in Machinery's Handbook for this kind of thing, so we dug into the science of femtosecond laser ablation and figured out a workflow that allows us to use our laser mill much like a CNC mill. This allows us to iterate on new designs in under an hour, allowing several iterations per day when we're really on a roll. As a result, the latest design, seen on the right, can actually insert through nine layers of dura, totaling three millimeters, on the benchtop. This is far more than we could ever expect in a human, with significant margin. [Applause]

The needle isn't the only part of the puzzle, though. As you can imagine, all these designs work with different threads, so we need a way to iterate on those as well, and we do this by having our microfabrication process here in-house. This summer we completely rebuilt our clean room in about nine weeks, which, among other things, greatly reduced particulate counts, allowing yield and throughput to greatly increase. This, combined with all the other great improvements the microfab team has made, allows us to iterate on new designs in just a matter of days. The last piece of the puzzle, though, is testing. We can come up with as many new designs as we want, but unless we have a way to actually test them in the right conditions, we won't know what to tweak, or even worse, we'll spend time optimizing for the wrong things. Take this failure mode, for example: a few months ago we got to the point where we could pretty reliably insert through the dura, but when we took the proxies and put them
in our micro-CT imaging, we realized that our hold on the end of the threads was actually too strong, and we were pulling them out just a little bit underneath the surface. By the time we solved the problem, we realized that this issue was very sensitive to the properties of the surrounding material or tissue: we could make a proxy where this never happens, and we could make another proxy where this happened every single time. And this highlights why it's crucial that we spend time making our benchtop tests match tissue as accurately as possible. I'm going to pass it off to Leslie now, who's going to talk about how we've been doing that.

Thanks. Hi, I'm Leslie, and I lead microfabrication R&D. Part of what we're interested in is understanding the biological environment our implant and threads experience once they're fully installed in the body. Learning directly from biology, though, is inherently slow, so in order to move fast we're developing synthetic materials that mimic the biological environment. This allows us to learn as much as we can on the benchtop and start taking steps away from the industry standard of animal testing. Developing accurate proxies, though, is challenging. The implant environment is made up of many anatomical layers that all have unique properties, and as time goes on and the implant site heals, new tissue forms, filling any available space. In addition to that, motion related to cardiovascular activity and head movement introduces added complexity. So to start addressing some of these challenges, we're engineering materials using feedback from biology. This may involve mechanical characterization of tissue, or analysis of interactions at thread-tissue interfaces. Much of this characterization is even done during surgery itself, by using custom hardware and software that modifies our surgical robot to double as a sensitive characterization tool. We then take the data collected and feed it back into optimizing our materials, so that they behave mechanically, chemically, and, as
shown here, structurally, just like biology. We've come a long way from our humble first brain proxy, shown here sitting on a plate and consisting of agar and a pyrofoam sheet; while simple, it allowed us to perfect robot insertions through countless benchtop tests. Today our proxy is slightly more complex: we've upgraded to a composite hydrogel-based brain proxy that better mimics the modulus of a real human brain, we've incorporated a dura proxy, and we've developed an injectable soft-tissue proxy that so far has allowed us to perform benchtop mock explant testing. We have a super long wish list for our proxy of the future, but some of those items include: a surgery proxy with integrated soft tissue, brain, bone, skin, or even a whole body; a brain proxy that simulates motion, vasculature, and electrophysiological activity; and a biological proxy to test biocompatibility and electrical stimulation. There's a ton of ongoing work getting us closer to our proxy of the future, including work on lab-grown cerebral organoids as shown here, and all of this will get us closer to a future where we learn more and iterate faster on benchtop and reduce our reliance on animal models, or even replace them completely. And with that I'll hand it over to Dan, who will be presenting a very exciting next-generation application. Thank you. Thank you, Leslie. My name's Dan, and I came to work at Neuralink after a career in visual neuroscience research. I was inspired to join this company because I saw in our device the potential to restore vision to people rendered blind by eye injury or disease. There are a number of particular characteristics of our device that make it uniquely suited to this application. Firstly, as well as being able to record from every channel, we can stimulate neural activity in the brain by injecting current through every channel. This is important because it allows us to bypass the eye and generate a visual image in the brain directly. Secondly, our device can
have an enormous number of electrodes for a visual prosthesis. This is important because the more electrodes you can have, the higher the density of the image you can create in the brain. Thirdly, thanks to our robot, we can insert these electrodes deeply into the brain. This is important for a visual prosthesis because the human visual cortex is buried deep in a fold in the medial face of the brain called the calcarine sulcus. In this image I've highlighted the calcarine sulcus in red in an MRI. It contains a map of the visual field; its surface area is about equal to a credit card, one on each side. And if you unfold it and flatten it, you see that the image is inverted, it's upside down, but more interestingly it's distorted, so that the central part of the visual field, the fixation point, is greatly magnified. For example, if you look at this image of Lincoln and you look directly into his right eye, everything to the left of that fixation point is directed to your right visual cortex, and everything to the right goes to your left visual cortex. His eye, even though it's very small in the image, is magnified in the brain to occupy nearly a quarter of the surface area of the visual cortex. Over the last half century, visual neuroscientists have developed a profound understanding of visual processing in the brain. What's driven most of this research is recording from single cells in the cortex, usually of macaque monkeys. One of the seminal discoveries was that every cell in the visual cortex represents only a tiny part of the visual field: your perception is made up of a mosaic of tiny receptive fields, each belonging to a single cell in your visual cortex. So if you record from one of these cells in a monkey, say in this location, you can find a very tiny region of the screen where a light stimulus will cause modulation of that neuron; another location in visual cortex will have a region elsewhere on the screen, in this case in the lower visual field.
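A standard way to put numbers on the foveal magnification described above is the inverse-linear cortical magnification function from the visual neuroscience literature. The sketch below uses Horton and Hoyt's commonly cited fit for human V1 (A of about 17.3 mm, e2 of about 0.75 degrees); these constants are literature estimates used for illustration, not Neuralink figures:

```python
import math

# Inverse-linear cortical magnification model for human V1.
# A ~= 17.3 mm and e2 ~= 0.75 deg are commonly cited literature
# values (Horton & Hoyt 1991), used here purely for illustration.
A_MM = 17.3
E2_DEG = 0.75

def magnification(ecc_deg):
    """Cortical magnification: mm of cortex per degree of visual field."""
    return A_MM / (ecc_deg + E2_DEG)

def cortical_distance(ecc_deg):
    """Distance (mm) from the foveal representation, integrating M(e)."""
    return A_MM * math.log(1.0 + ecc_deg / E2_DEG)

# Compare the central 10 degrees against a ~90-degree hemifield:
central = cortical_distance(10.0)
full = cortical_distance(90.0)
print(f"central 10 deg -> {central:.1f} mm of cortex")
print(f"fraction of cortical extent: {central / full:.0%}")
```

Under this model, over half the linear cortical extent is devoted to the central ten degrees, which is why a tiny feature near fixation, like Lincoln's eye, can occupy such a large fraction of the map.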
These regions are called receptive fields. We've inserted our device into the visual cortex of two rhesus monkeys, whose names are Code and Dash. That means we can record activity from their visual cortex generated by their normal home environment as they roam around. But as we all know, monkeys love banana smoothie, and that means we can easily teach them to fixate points on a screen and reward them. We can reward them very precisely because we can track the location of their eye using an infrared camera. One of the things this allows us to do is to plot the receptive fields for every neuron that we can record with a single device. We do this by showing the animal a movie of random checkerboards whilst they fixate steadily on the screen; then we take only the frames of the movie that generated a response in the cell and average them all together. This is a technique known as reverse correlation, and it's used quite widely in visual neuroscience for this purpose. This is an example of a receptive field plotted with this technique: the central cross is the fixation point, and you can see the little red and blue regions of excitatory and inhibitory receptive field. These regions give cortical cells some of their characteristic properties, and we can record receptive fields from all the electrodes at the same time.
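The reverse correlation procedure described here, taking only the frames that evoked a spike and averaging them, is often called a spike-triggered average. A minimal synthetic sketch (the simulated cell and its hidden receptive-field pixel are illustrative stand-ins, not recorded data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reverse correlation: random checkerboard frames, plus a simulated
# cell whose firing depends on one pixel (a stand-in for its receptive field).
n_frames, h, w = 5000, 16, 16
frames = rng.choice([-1.0, 1.0], size=(n_frames, h, w))
rf_row, rf_col = 6, 9                       # hidden "receptive field" pixel
drive = frames[:, rf_row, rf_col]
spikes = (drive + 0.5 * rng.standard_normal(n_frames)) > 0.8  # spike per frame

# Spike-triggered average: mean of only the frames that evoked a spike.
sta = frames[spikes].mean(axis=0)

peak = np.unravel_index(np.abs(sta).argmax(), sta.shape)
print("recovered receptive field location:", peak)  # expect (6, 9)
```

The averaging cancels the random structure everywhere except the pixels the cell actually responds to, which is exactly why random checkerboards work as the stimulus.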
And if we take all these receptive fields, accumulate them, overlap them, and place them on a computer monitor for scale at a typical viewing distance, you begin to get an idea of how much of the visual field we can cover with this preliminary device. Many of the receptive fields are close to the fovea, so close to the fixation point, and that's partly due to the magnification I talked about with the fovea, but there's also a scattering of fields in the periphery; these are from recording sites deeper in the brain, in the calcarine sulcus. So far I've only talked about recording information from the cortex, but to produce a visual prosthesis we need to stimulate. If we stimulated the cells whose receptive fields are in this location, we would produce a perception of a flash in that location that only the monkey can see. How do we know that the monkey sees it? How do we know what it looks like? Well, unfortunately we can't ask them what they see, but we can train them to tell us something about that phosphene. We start by training the monkey to fixate a central point on the screen, like this white dot, and we start by presenting real visual stimuli on the screen and rewarding the monkey for making eye movements toward those stimuli. So here we flash a white dot and the monkey makes an eye movement towards it, symbolized by the green arrow. We then choose another random location and reward the monkey for making an eye movement towards it. Once he's got good at this task, we can begin to interleave these real stimuli with electrical stimulation of electrodes to produce a phosphene. The monkey sees the flash and naturally makes a saccade towards it. This tells us not only where in the visual field the flash occurred, but we can also change the current that we inject into that electrode to see how often he makes the saccade, and how noticeable, or perhaps how big, the phosphene is that we're producing. Let's look at Code performing this task; I want to show you first at one-quarter speed.
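A common way to quantify how often the saccade happens as a function of injected current is to fit a psychometric function to the detection data. This is a generic sketch with synthetic trials and a coarse maximum-likelihood logistic fit, not Neuralink's actual analysis; the currents, trial counts, and parameters are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trials: stimulation current (uA) vs whether a saccade occurred.
currents = np.repeat(np.array([2.0, 4.0, 8.0, 16.0, 32.0]), 40)
true_thresh, true_slope = 10.0, 0.3
p_detect = 1.0 / (1.0 + np.exp(-true_slope * (currents - true_thresh)))
detected = rng.random(currents.size) < p_detect

def neg_log_likelihood(thresh, slope):
    p = 1.0 / (1.0 + np.exp(-slope * (currents - thresh)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(detected * np.log(p) + (~detected) * np.log(1 - p))

# Coarse maximum-likelihood grid search keeps the sketch dependency-free.
threshes = np.linspace(1, 40, 200)
slopes = np.linspace(0.05, 1.0, 100)
nll = np.array([[neg_log_likelihood(t, s) for s in slopes] for t in threshes])
ti, si = np.unravel_index(nll.argmin(), nll.shape)
print(f"estimated 50% detection threshold: {threshes[ti]:.1f} uA")
```

The 50% point of the fitted curve is the detection threshold for that electrode, and the slope says how abruptly the phosphene becomes noticeable as current increases.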
There's a visual flash and he makes an eye movement towards it. The monkey can only see what is white on this screen: he can't see his own eye movements, and he certainly can't see when we stimulate. But here we stimulate, and he makes the same saccade to the same location, because we stimulated the same electrode; nothing appears on the screen at that time, and he has no other cue to make that eye movement. Let me show you this in real time. You can see monkeys like to work very quickly, and when we stimulate he makes that saccade in real time. And it looks like he's had enough. So what I've shown you is a way to produce a phosphene in the visual field. This is not something new in visual neuroscience, but if you think about that phosphene as a single pixel in a visual image, all we need to do is scale up, produce a great many more pixels, and have them covering the visual field. This is a schematic of what a visual prosthesis using our N1 device might look like: the output from a camera would be processed, by an iPhone for example, which would then stream the data to the device, and the image would be converted into a pattern of stimulation of the electrodes in visual cortex. With a thousand electrodes we might be able to produce an image resembling something like what you see there on the right, but as Avinash told you, our next generation of the device will have 16,000 electrodes. If you put a device on both sides of your visual cortex, that would give you 32,000 points of light to make an image in someone who's blind. Our goal will be to turn the lights on for someone who's spent decades living in the dark. Thank you. Thanks very much, I'll pass you over to Joey, who's now going to talk about another very exciting application of our device. Thank you, Dan. My name is Joey, I'm an aero engineer, and I'm the head of the next-gen team at Neuralink. For persons with spinal cord injury, the connection between the brain and the body is severed: the
brain continues functioning normally, but it's unable to communicate with the outside world. You've already heard about how we can use the N1 Link as a communication prosthesis to help someone with spinal cord injury control a computer or a phone, but it can also be used to reanimate the body. Let me show you how. First, a little neuroanatomy. Movement intentions arise in motor cortex and are sent down long nerve fibers through the spinal cord; these are upper motor neurons. In the spinal cord they synapse, that is, make a connection, with another motor neuron, a lower motor neuron, which sends these movement intentions to the muscles, which contract, and in turn you have movement. While of course there are many other circuits involved in voluntary movement, you can think about the spinal cord as many pairs of these two connections, and in spinal cord injury one of these connections is severed, unable to make the muscles contract. Let's zoom in a little further. Here you can see on the left a cross-section of the spinal cord, with a fiber coming down schematically. This travels through the white matter tracts, this is the upper motor neuron, and then it synapses within this butterfly-shaped region of gray matter, in what's known as a motor pool. From the motor pool, the lower motor neuron descends out the ventral roots to the muscles, which contract, and then the sensory consequences of those movements, for example the touch of your hand against an object, return to the spinal cord through the dorsal roots and ascend the spinal cord up into the sensory regions of the brain. Again, in spinal cord injury this connection is severed. If we could place electrodes into the spinal cord, say in a motor pool adjacent to lower motor neurons, we could stimulate those neurons, activating them and in turn causing the muscle to contract and movement to occur. But this is very hard to do: the spinal cord is quite delicate and it moves significantly within the bony spinal canal. This could cause damage to the
electrode, it could cause damage to tissue, or both. But our electrodes are small and flexible, and our robot is able to insert them deep into tissue, perhaps all the way down into the ventral horn of the spinal cord. And so we have done just that. Here you can see a view from the R1 robot, a targeting view, and we've placed electrodes across many millimeters of the spinal cord. The R1 robot is able to insert those electrodes deep into the ventral horn, into motor pools, in very close proximity to lower motor neurons. This is important because it allows them to have a localized connection to those neurons and activate very precise movements. Now, to track movement, it's very common to use motion-capture markers like you might see in the production of a movie. These can be placed with a light adhesive, and you can see me placing these on my hand. We're going to use these markers to let us zoom in on movement in the next couple of slides. Okay, so here's a pig walking on a treadmill, and you may have seen something like this before in a previous Neuralink presentation. But unlike before, this pig has more than one Neuralink device: there's a device in the brain, but there's also one in the spinal cord, and we can stream neural data from these devices in real time and use them to do things like decode the movement of the joints of the pig. So here you can see on the left a time series of the hip, knee, and ankle, and we're decoding those movements. This is super cool, but that's actually not what we want to do; we want to go in the other direction. We would like to stimulate the spinal cord and cause movement to occur. Okay, so let's do that. Here's a pig, a happy and healthy pig, doing what pigs like to do, which is root around for food and snacks. As you'll see, on the floor there's a blue square. This is a voluntary engagement zone, where the pig places itself indicating that it's comfortable to receive stimulation. When it's in the zone we stimulate, and if the pig leaves the zone we stop stimulating.
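A decoder producing hip, knee, and ankle traces like the ones described can be sketched as regularized linear regression from binned firing rates to joint angles. Everything below (channel counts, Poisson rates, the linear model) is synthetic and illustrative, not Neuralink's actual decoder:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: binned firing rates from 128 channels and
# three joint angles (hip, knee, ankle) that depend linearly on them.
n_bins, n_channels, n_joints = 2000, 128, 3
rates = rng.poisson(5.0, size=(n_bins, n_channels)).astype(float)
true_weights = rng.standard_normal((n_channels, n_joints)) * 0.1
angles = rates @ true_weights + 0.5 * rng.standard_normal((n_bins, n_joints))

# Ridge regression decoder: W = (X^T X + lambda I)^-1 X^T Y
lam = 1.0
X, Y = rates[:1500], angles[:1500]          # training split
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Evaluate on held-out bins.
pred = rates[1500:] @ W
resid = pred - angles[1500:]
r2 = 1 - resid.var(axis=0) / angles[1500:].var(axis=0)
print("held-out R^2 per joint:", np.round(r2, 3))
```

The ridge penalty keeps the weights stable when many channels are correlated, which matters when nearby threads pick up overlapping populations.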
As before, you can see we're able to track the position of the joints and also stream neural data as well. Okay, so let's stimulate an electrode. Here's one electrode, on one thread, that when we stimulate causes a flexion movement of the leg. On the left you can see the movement of the joints, and you can also see the time series of the stimulation pattern in yellow; so the leg is moving up. Here's another electrode which, when we stimulate, causes an extensor movement. This is actually a little harder to see, because the leg is straightening and the hips are shifting, but if you look carefully you can see how the leg is moving. We can stimulate on a great variety of threads and produce different movements, and actually sequence them spatiotemporally to produce patterns. On the left you can see a time series of different stimulation on different electrodes, you can see the movements of the joints, and on the right we're zooming in on muscle activity, which gives us an idea of the strength, power, and specificity of those movements as well. In addition to doing sequences, we can also achieve sustained movement. These are powerful muscle contractions of the sort that you might need for standing or other load-bearing activities, and they are really crucial for interacting with the world. Okay, so stimulating the spinal cord is only one piece of the story; you also have to get command signals for the stimulation of the spinal cord. Fortunately, we have a way to do that: we have the N1 Link, which you've already heard about, placed in motor cortex. How would that work? We place threads in motor cortex and record spikes. These spikes would be wirelessly transmitted in real time and decoded into patterns of stimulation; stimulation would then be delivered to the ventral horn of the spinal cord, to the appropriate motor pool for the muscles that we'd like to activate. We then stimulate, activating those lower motor neurons, which causes the muscles
to contract and movement to occur. Now, of course, movement without sensation is actually kind of difficult; just think about what it would be like to try to move your limbs if they're numb. But we can also get sensory information as well. The sensory consequences of your movement can be recorded in the dorsal horn of the spinal cord in the form of spikes, for example here a feather touching the hand. These spikes can in turn be decoded in real time and turned into patterns of stimulation sent to either the same N1 device in the brain, or perhaps a different one in a sensory area. Stimulation of that part of the brain would cause percepts of touch and proprioception, closing the loop. So, putting those two loops together: we have motor intentions decoded from the brain, used to stimulate the spinal cord, causing movement, and then the sensory consequences of those actions being recorded in the spinal cord to stimulate the brain, causing perception. Now, we have a lot of work to do to achieve this full vision, but I hope you can see how the pieces are all there to achieve it, and if you find this prospect as exciting as I do, I hope you'll consider joining us here at Neuralink. [Applause] [Music] It would also be great for the scientific neuroscience community to access some of these tools; do you have any plans to make these available to neuroscientists? Yes, yes we do. That's a great question. I think there's probably a lot that could be figured out if we provide the surgical robot and devices to neuroscience research departments at universities and hospitals. We'd need to be in production with the machines and obviously have the FDA approvals, but I think it would make a lot of sense to provide this to research universities and hospitals. As a follow-up, the question is: of the data sets that you've collected, are there any that you plan to open source for the scientific community? Yeah, I
think that would be fine. Yeah, sure, absolutely, because I think it could be really interesting for people to build upon that and build foundation models for the brain. Yeah, that's a good point. Actually, no problem with just publishing it on our website; you can use it if you want. Great, thank you for the very wonderful presentation. So I have one question. As we all know, for an implantable electrode, whether for stimulation or recording, after we implant the electrode, scar tissue will grow around it, and especially for recording, the signal we get will become smaller and smaller after long-term implantation. How do you solve this issue? So, for context, I'm Zach, and I lead the microfabrication team on brain interfaces. I don't think we can solve it specifically, but one advantage we have is both the flexibility and the small size of our threads, to try to limit that scar tissue and that damage. And some future work that we have started, and that we'll continue working on, is pushing the size of the threads down, just to try to limit the immune response and really limit that scar tissue growth. I actually want to follow up: do you think it would be helpful to load some drug on the surface of your electrode, or some other approach? Well, I think maybe the question is: what sort of signal degradation have we seen over time? Basically, does it work a year later, does it work two years later? It does. Yeah, so that's a good point. In terms of thread longevity specifically, really the gold standard we can use to assess it is the data we have from our animal participants. And for that, I'm not sure if it was mentioned before, but the longest data we have right now is for an animal participant who has 600 days with useful functioning channels, where we were doing something
useful with the signals for BCI. And with the newest version of our device, we have a collection of participants who are at or near one year of data, with completely useful, functioning BCI from that as well. Thank you. If I may add one more thing: you mentioned potentially having drugs to reduce inflammation, and one of the things that we are actually actively working on is having some sort of biological coating to either reduce inflammation or make the threads slippery. You heard in the presentation that one of the challenges we have is removing the threads from these neomembrane tissues that form after implantation, so there are programs like that, where we're looking at incorporating some of the learnings from biology and these coatings into our threads, so that we can hopefully reduce inflammation as well as make them easier to extract. We're also continuing to reduce the size of the electrode, because when the electrode gets really small, the inflammation response or scar tissue becomes minuscule; with a very, very tiny electrode, the body basically ignores it. This is really impressive, congrats to the whole team. So, as you of course know, one of the problems with current electrodes is that they're rigid and they move around, so you get these neural nonstationarities, and I think many of us had hoped that with these very thin threads they would move more with the brain and you wouldn't see that. But from the data you showed over many hundreds of days, there was a lot of variability. So can you speak to how much they move? Do you have any idea why they move, can you stop them from moving, and how stable are the signals hour to hour and day to day? Hi, I'm Bliss, I'm one of the leads of the software groups on the brain interfaces team. In the particular plot you were mentioning, what we were showing was the average firing rate recorded per day on a
particular channel. As you well know, it's pretty complicated to determine whether you're recording from the exact same neuron day after day after day. It could be, for example, that you're actually picking up a different neuron day to day, and that's why you get the change in firing rate. We don't think this is the majority cause of the situation here; the reason is that if you look at the spike shapes day to day, even when the average firing rate is shifting a lot, you still see stable spike shapes. That's obviously not a fully bulletproof story, but it at least gives some confidence that it's not actually different neurons you're picking up. However, there still is very much a chance that that could be the case in at least some part of the robustness and nonstationarity story. Yeah, cool, thanks. Yep, thanks for the question. To be clear, the electrode position is actually fairly stable, because you've got these very tiny wires, and then there's some play: you've got the device attached to the skull rigidly, but then you've got this long, tiny wire with kind of a coiled section, so it does tend to basically stay in the same place. Couldn't Neuralink help realize... Well, I mean, once you're in there, there's a lot you could do. You can obviously measure temperature, so you could do very early detection of a fever; you could also measure pressure; and I think you could probably detect the very beginnings of a stroke, because you can see electrical signals starting to go haywire. So there's actually probably a lot of just general health monitoring that you could do once you're in there, with very simple sensors. Hi, you guys all did a great job of distilling a lot of complex engineering and science and making it wonderfully clear, so great job. I wanted to ask a little bit about the stimulation, I guess
for the phosphenes and for the evoked movement. How are you thinking about this: is it more like local stimulation, is it juxtacellular, are you steering current around? How many cells are you activating, how much current are you using? I'm just curious what the scale of this is, and whether you have a lot of precision, or pretty profound behavioral effects too. Hi, yeah, I'm Dan. How many cells you stimulate with a single electrode depends on the impedance of the electrode, the size of the conductive pad, how much current you deliver, the frequency, all these factors, so there's a great deal of variability that we can use to customize the shape of a phosphene, or maybe not the shape necessarily, but the intensity of the phosphene. We think with our current electrodes, at least in Code, a back-of-the-envelope calculation would be something like a 50 to 100 micron diameter sphere of cells being stimulated. In the visual system, the smaller that sphere, the smaller and more specific you can make a particular phosphene, basically the smaller the pixel in the image you can produce.
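That back-of-the-envelope can itself be sketched in a few lines: count the neurons inside a 50 to 100 micron diameter sphere using a rough cortical density figure. The density value here (about 1e5 neurons per cubic millimeter) is a generic literature-scale assumption for illustration, not a Neuralink number:

```python
import math

# Rough cortical neuron density, ~1e5 neurons per mm^3: an assumption
# for illustration rather than a measured Neuralink figure.
DENSITY_PER_MM3 = 1.0e5

def neurons_in_sphere(diameter_um):
    """Neurons inside a sphere of activated tissue of the given diameter."""
    radius_mm = (diameter_um / 2.0) / 1000.0
    volume_mm3 = (4.0 / 3.0) * math.pi * radius_mm ** 3
    return DENSITY_PER_MM3 * volume_mm3

for d in (50, 100):
    print(f"{d} um sphere -> ~{neurons_in_sphere(d):.0f} neurons")
```

Under these assumptions the activated population is on the order of a handful to a few dozen neurons per electrode, which is what makes a phosphene behave like a small, specific pixel.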
There's plenty of scope for customization of that. It's actually also possible to get to a much higher effective pixel count by controlling the electric field between the electrodes, so it's not necessarily a one-to-one relationship; you can dynamically adjust the field and stimulate with a very high neuron-to-electrode ratio, maybe 10-to-1 or 100-to-1 potentially, so it's megapixel-type territory, basically. Can you see normally? I think people would want to know that, and I think that is one of the possible outcomes. Hi, this is amazing. Can you talk about the longevity of the implant itself, and also how the material of the implant reacts with the brain tissue, or the density of the bone or bone structure? Thank you. Happy to talk about this. I'm Jeremy, an engineer on the brain interfaces team, and I think it's good to start with data. Like Zach mentioned, we have an implant where a monkey was performing BCI for 617 days, and that was Pager, before being upgraded to the latest device. For our current version of the device, it's lasted for almost a year, and then for the accelerated lifetime testing that Josh talked about, we have data from our implants: from the previous version, eight years of accelerated time, and from the current version, four years of accelerated time and counting. So that's starting with the data: those devices are still lasting and still going. Theoretically, there are three fundamental factors that contribute to the longevity of the device. One is the seal, the hermetic enclosure of the device; two is the battery and internal electronics; and three is the threads that Zach talked about a little bit, and the channels being able to functionally record signals from the brain. The seal, we think, will far outlast the other two in terms of the bottlenecks.
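Accelerated lifetime testing of implant enclosures is commonly modeled with an Arrhenius-style rule of thumb in which aging roughly doubles for every 10 degrees C above body temperature. The Q10 factor of 2 and the 67 C soak temperature below are generic illustrations of how "eight years of accelerated time" can accumulate, not Neuralink's actual protocol:

```python
# Arrhenius-style rule of thumb for accelerated soak testing: aging
# roughly doubles per 10 C above body temperature. Q10 = 2 and the
# soak temperature are illustrative assumptions, not Neuralink's protocol.
Q10 = 2.0
BODY_C = 37.0

def acceleration_factor(test_temp_c, q10=Q10, body_c=BODY_C):
    """How much faster aging proceeds at the elevated soak temperature."""
    return q10 ** ((test_temp_c - body_c) / 10.0)

def simulated_years(real_days_in_soak, test_temp_c):
    """Equivalent in-body years accumulated by a soak of the given length."""
    return real_days_in_soak * acceleration_factor(test_temp_c) / 365.0

af = acceleration_factor(67.0)
print(f"acceleration factor at 67 C: {af:.1f}x")
print(f"1 year in a 67 C soak ~ {simulated_years(365, 67.0):.0f} simulated years")
```

So a single calendar year of soaking at 30 C above body temperature would, under this rule of thumb, stand in for roughly eight years in the body.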
On the seal: theoretically, as Josh mentioned, it's a thermoplastic polymer material, so there's going to be a very small amount of moisture that diffuses through it over time, and we think it will last 20-plus years easily in terms of just that property. And like I said, we have not seen our seals fail with our current version of the device yet, so we haven't really pushed the limits here. For the battery and internal electronics, it's really based on usage and how much runtime you want, and we are currently working on getting data to project out even farther, but right now we believe that we can achieve 80% runtime at the three-year time point, which would be about three and a half hours of a four-hour runtime. And as Avinash mentioned, we're doubling that very soon, and we have plans to quadruple it, so the internal electronics really aren't the bottleneck either. So really we're attacking the threads themselves and the longevity of those channels, and Zach can talk about some of the improvements that we're making to increase that longevity. Cool, thanks. Yeah, so, as mentioned before, we don't necessarily have an end point, as Jeremy said, for the testing of the threads. That being said, we are focusing on longevity because we think this is an important issue to solve. One thing that we're doing in parallel with the current device is aggressively pursuing amorphous silicon carbide insulation of the threads, which we believe will take us well beyond five years of longevity, though of course that's still to be tested. And in parallel with that, we're just starting to look at atomic layer deposition, which we think could push the longevity of the threads even further while depositing very thin layers, keeping the flexibility of the threads and that advantage. Along with that, we're also of course having to design and validate very robust benchtop testing to model real in vivo conditions
and look at channel degradation. So that's what we're looking at for the longevity of the threads. And then I think you asked about biocompatibility: essentially all the materials we're using right now are, I can say, at least biostable, and we send out testing for biocompatibility very often. Essentially what we're doing is, in many cases, using known materials from the literature that academic labs have already started to look at, and jumping on that and using it as a starting point. Thanks for answering that. Cool, so we have another question from Twitter. This is from David, and he asked the team: what are the biggest lessons you learned since the previous presentation? It's been about two years, and I'm sure there was a lot of engineering done, so does anyone want to answer what we learned in the last two years? So, one thing that we've learned in the last couple of years is just how much the brain moves on the human scale. When you start small, when you make brain proxies, and a lot of research starts with rodents, the brain does not move that much; then you get to a human, and the brain can move hundreds of microns or more, and when our threads and needles are so small, that motion, when you zoom in, looks like a mile. To add to that, one thing is how dynamic the implant environment actually is. We've talked about how, when the implant site heals, scar or new tissue might grow and fill in the space, and that will affect how our threads interact in that space. That's why we've emphasized so heavily the importance of designing accurate proxies: instead of having to wait months for an implant site to heal, you can hopefully learn that information in hours. I'm Alex, on the robotics team. I think one of the things we've definitely learned within the engineering teams is the importance of really continuous validation and testing: where we're building, say, motion systems that are precise to single-digit microns, we
need validation and test systems that we trust even more than that to prove that they work reliably, and putting just as much focus into those validation and test systems, and designing them alongside our products, is one thing we've definitely learned. One thing that I learned, as part of the BCI brain control and algorithms work, is that building a prototype and making it work with only one monkey, Pager, was maybe a great success, but it was also relatively easy compared to making it work every day for all the other monkeys. So actually making it a product is something that's not easy, but we are learning how to do it. I mean, I've learned that the brain is really squishy, like way squishier than you'd think. It's not like cauliflower or broccoli or something like that; it's more like a water balloon, and it's moving in your skull, like, a lot. So a squishy water balloon in a coconut is maybe a good way to think of it. Hello. Given Bluetooth bandwidth limitations, have you considered other technologies for wireless communication? Hey, yeah, I can take the first part of this question and then I'll let Matt do the second part. It's a great question, especially as you think about how to increase and scale the number of channels that we want to record from; this becomes increasingly a bottleneck for the kind of work that we want to do. We're thinking about this in a couple of ways. One is directly improving the underlying radio interfaces, and I'll let Matt talk about that in a second. The other way we're thinking about this is: how can you be more efficient with the data you send off the implant? The first version of that is compression: just taking your data, looking at its characteristics, finding a way to represent it more efficiently, and sending off that compressed stream. For reference, right now our Bluetooth bandwidth is around 150 kilobytes per second, and the compressed stream of data that we send off the implant is around 50 kilobytes per second, so we're doing fairly well there so far.
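Some rough arithmetic shows why compression alone becomes hard at higher channel counts: compare the raw data rate against the roughly 150 kB/s link quoted above. The per-channel sample rate and bit depth below are generic broadband-recording assumptions (on the order of 20 kHz at 10 bits), not Neuralink specs:

```python
# Rough link-budget arithmetic. Sample rate and bit depth are generic
# broadband neural-recording assumptions, not Neuralink's actual figures.
SAMPLE_RATE_HZ = 20_000
BITS_PER_SAMPLE = 10
LINK_BYTES_PER_S = 150_000          # ~150 kB/s Bluetooth figure from the talk

def raw_rate_bytes_per_s(n_channels):
    """Uncompressed data rate for n broadband channels."""
    return n_channels * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 8

for n in (1024, 16_000):
    raw = raw_rate_bytes_per_s(n)
    ratio = raw / LINK_BYTES_PER_S
    print(f"{n:>6} channels: raw {raw / 1e6:.1f} MB/s, "
          f"needs ~{ratio:,.0f}x compression to fit the link")
```

At thousands of channels the required compression ratio runs into the thousands, which is what motivates sending decoder outputs instead of raw signals.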
But when you start thinking about 16,000-channel devices, that won't get you all the way there. Some other things that can help on the compression side: actually just send out the output of the machine-learning model, rather than the input required to run it. One thing we've been trying in the background here is called decode-on-head, which is essentially taking the machine-learning models — right now they run on the MacBooks our monkeys are gaming on — and moving them to run on the implant itself. This is a super cool engineering problem: if you want to talk about how to make complex neural networks run on what is the equivalent of a garage door opener, come talk to me — it's fun. So that's another way to solve this problem: do the computationally intensive work on-device to get just the signal you actually care to use to control something, and send that out of the implant. On the radio side, I'll hand it over to Matt.

Yes — to answer your question, we are looking at other radio technologies. One in particular is ultra-wideband, with about 500 megahertz of bandwidth, at a couple of different frequencies. This has an advantage in terms of the bit rate you can achieve — on the order of six to eight to ten megabits — and there's also a latency improvement that's quite substantial. There's another wireless technology we're looking at as well, at W-band.

Hi — thank you all for really clear and compelling presentations. Something that struck me in one of the earliest talks — I think it might have been DJ's — was this vision for the ability to acquire new complex skills via these BCIs, like the ability to perform kung fu. That reflects the fact that the brain is fundamentally a learning machine. And yet many of the technical solutions presented later were framed in such a way as to try to correct for the way the brain
changes over time — over longer timescales: skills drift over the course of days, or the tissue might heal over time. I was curious what your vision, collectively, was for developing out this technology that interfaces with a fundamentally plastic system, one that changes in complex ways over a variety of timescales — days, months, years.

Yeah, it's a tough question. I think it will be a kind of bi-directional learning in some ways. Sometimes the brain might fix our algorithms' mistakes — we prefer to have more stable performance — but of course, if over time the person and the brain learn how to use the BCI better, we'll need to update our models. So they will be in an interactive kind of relationship. As for learning new tasks — yeah, this will probably be something where, over time, we'll need to learn how the person learns to interact with the computer, and then build the appropriate interface — the UX and the UI — and build algorithms that help them control what they want.

Just one thing I'd like to add on to what Nir said: it actually is an advantage in some ways that the brain is plastic and learns, and that can help us, because we have to do less work — the human in the loop will actually learn how to use our device better. But one of the advantages of our particular approach and device is that we're trying to build an extremely high channel count device, so we can uniformly distribute electrodes over a functional region, and then it doesn't matter so much whether things move or shift over time — we can offload that to software. So we can build algorithms that change over time as well, and both of those things are, I think, advantages of our particular approach.

We have another question from Twitter: Juan wants to know what career path you suggest for somebody who is just getting out of high school, if they want to work at Neuralink in the future.

Well, it's really any of the
skills that we described. We're developing new chips; there's materials science, software obviously, animal care — it's really all the things listed at neuralink.com/careers. That would be a good guide.

I'm actually very fond of saying: when you flip through any college booklet and look through all the majors, I think you can point to every single one of those majors and there's someone at this company who either is an expert in it or majored in it. So it really is a truly multi-disciplinary endeavor. Just focus on whatever you're passionate about, or whatever you're talented at, and pursue that as deeply as you can — there's definitely going to be a place for you, and you'll end up building neural interfaces.

Hey — we got to see the monkeys doing telepathy, but could you say a little bit more about the animal behavioral training — kind of their lives and day-to-day processes?

Sure. I'm Autumn, head of Research Services, which includes our animal care program, and as an animal welfare scientist this is a topic I'm deeply interested in. Our training program is staffed mostly with behavior analysts, who help us think about how to remove any of the potential aversives or frustrations from our training. We think about conditioning — which includes positive reinforcement — as the primary way to train. Let's see, what else can I share with you... This may not be part of the behavioral training itself, but we think of animal welfare assessment in the framework of the three Rs: refinement, replacement, and reduction. When we think about refinement, behavioral training does apply in that way, and one of the things we make a very top goal to remove in research is restraint. So you saw a lot of videos today where animals were
walking up to their stations, because we worked really hard to remove any requirement to restrain the animal. Anything else?

Yeah — just on top of that last point: as an engineer here, one of the things that is really inspiring and really cool about this place is that we get to work on a lot of technological innovations that directly translate to greater independence for the animals when they're engaging in these tasks. As you saw, monkeys charge just by voluntarily walking up to a branch; they play games in their home habitat with a laptop computer, voluntarily. The fully implantable, fully wireless device and the inductive charger all enable that kind of experience, and that's one of the very cool parts about working here — we get to innovate on things like that.

It definitely helps to work with a group of engineers who can make really cool stuff that makes behavioral training easier for the monkeys.

Yeah — so I guess, to answer the previous question about what you can study to be part of Neuralink: monkey engineering.

You can add to that: monkey business.

Hello. My question is on upgradability, which you mentioned quite a bit. In that procedure, I imagine there's some kind of explant procedure, and then you're putting in a new set of implants. Could you talk about the possible tissue damage, if any, from the explant procedure? How long do you have to wait? Do you implant in the same areas? And what's your brain scanning for the implant procedure, in terms of upgrading? I don't know how many questions I can ask...

I can start to speak to some of those. I work a lot on upgradability and those explant processes, and on designing them to be better. The goal we're working towards, as I mentioned in the presentation, is that it's really just as easy to upgrade an implant as it is to initially install one. We didn't show many of those
explant examples today, but we've come pretty close to just popping out an implant and reinstalling another one in the exact same location — that's definitely the goal. We're installing the implant in primary motor cortex, which is a valuable area for interacting with a device like this, so the goal is to implant in the same location. If you expand out to other applications, then you'd be interested in moving somewhere else, but we definitely want to be able to insert into the same area.

In terms of damage: the damage we care most about is damage within the brain. We talked about the challenge of the tissue layer on top of the brain, and I think we're well on our way towards figuring that out. But because of the threads' small size, the scar capsule within the brain is so minimal that they're actually removed quite easily, so we see useful signals even on the second or third time you've placed an implant. I think some of our BCI folks can probably speak to that — we do have monkey participants working with their second devices and really making use of them.

So, two questions. One: somebody asked about the brain being plastic — have you noticed any plasticity from a behavioral perspective in any of the monkeys, or is it too soon to tell? Or have there not been any observations from monkey behavior?

We see that it takes them a while to learn the task itself, of course, but once they're implanted it's relatively quick for them to ramp up and get to high-performance brain control. With Pager, for example, after a few days — three days — he was already able to learn very quickly to use the device. He had been trained on the task first with his previous implant, but with the new one, after three or four days he was able to get close to the performance he held with the previous implant.

But have you
noticed anything on the adverse side — meaning the brain has outpaced the neural network that you're running?

It's hard to say. No, not really.

Okay, so I have another question, more about the electrical side. You talked about 1024 channels being recorded — are you transmitting the raw signal, or only the spike events you were talking about (the low, mid, and high), or is it the entire raw waveform that you transmit?

Hi, I'm Julian — I can speak a bit about this, and maybe Avinash wants to contribute. Our chips see the raw signals, but what we transmit out is typically spikes, and we detect those spikes in real time on the chip — this massively compresses the data. We're making improvements to that, but we can also request raw samples sometimes, and we can process particular statistics or other data directly on the chip and send out the calculated values. So there are many ways to play with the data.

Yeah — at least with the current N1 system, which relies on a BLE radio, there are bandwidth limitations, so you can't actually stream raw data from all 1024 channels. But just to give you a little history of how our compression algorithm — the spike detection algorithm — was developed: we did have a wired system (there was a paper that we published) with a USB-C connector that streams all those signals through a high-bandwidth wired connection. So we did have development platforms to see the raw signals and decide which set of information we want to extract — information that fits within the bandwidth of the radio and is useful for BCI control. Also, sending data wirelessly costs a lot of energy, so we take any opportunity we have to reduce that burden: we try to do all that compression as close to where the electrodes are as possible.
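A back-of-envelope budget shows why the compression has to happen next to the electrodes. The ~150 kB/s Bluetooth figure and the 1024/16,000 channel counts are from the discussion above; the per-channel sample rate and ADC bit depth are assumptions for illustration only:

```python
# Rough data-rate budget for an implant radio link.
SAMPLE_RATE_HZ = 20_000       # assumed per-channel sample rate
BITS_PER_SAMPLE = 10          # assumed ADC resolution
BLE_BUDGET_BPS = 150_000 * 8  # ~150 kB/s usable Bluetooth throughput

def raw_rate_bps(channels):
    # Data rate if every channel were streamed raw, uncompressed.
    return channels * SAMPLE_RATE_HZ * BITS_PER_SAMPLE

for ch in (1024, 16_000):
    rate = raw_rate_bps(ch)
    print(f"{ch} channels: {rate / 1e6:.0f} Mbit/s raw, "
          f"{rate / BLE_BUDGET_BPS:.0f}x over the BLE budget")
```

Even at 1024 channels the raw stream is a couple of hundred megabits per second — orders of magnitude over the radio budget — which is why on-chip spike detection, and eventually decode-on-head, are needed rather than streaming raw waveforms.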
Maybe one thing that isn't obvious is that the actual bit rate you need to control a phone or a computer is very, very low. I think we might have the record for bit rate — is that correct? We think we do — maybe. It's on the order of 10 bits per second, so that's super slow. But if you think about when you're inputting data into a phone — how fast your thumb is moving, your thumb taps per second — it's pretty low. Basically, our thumbs are like two slow-moving meat sticks; it's a really low bar, is what I'm saying. So at least for output, if you get 10 bits per second, you're hauling ass. You don't need Bluetooth or anything for that — you could practically send it out with beeps and boops. If you're going for high-bandwidth visual, now you're maybe getting to a megabit-plus, but that's still well within Bluetooth. Anyway, what I'm saying is that the data rate is not a constraint.

One other maybe notable item, which we talked about in the presentation: we think we can probably solve for doing the implant without cutting the dura. We can just make a bunch of tiny holes through the dura — the dura being the thick, fibrous layer that's up against the skull. If you don't cut the dura away, and instead make a bunch of tiny holes and insert the electrodes through those tiny holes into the brain, then the recovery time is ridiculously fast — you're not really losing much in the way of cerebrospinal fluid. In theory, the whole thing could be like a ten-minute operation, like LASIK. It's fast — it's not a big, laborious thing; it's super fast.
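To put the ~10 bits-per-second control figure above in context, here is a rough Shannon-style estimate of thumb-typing information rate. The keyboard size and tap rate are assumptions, and treating all keys as equiprobable overstates the rate for real language — this is only an order-of-magnitude sketch:

```python
import math

KEYS = 30           # assumed effective number of keys on a phone keyboard
TAPS_PER_SEC = 2.0  # assumed sustained tap rate

# Bits per tap if every key were equally likely (an upper bound).
bits_per_tap = math.log2(KEYS)
typing_rate = bits_per_tap * TAPS_PER_SEC
print(f"{typing_rate:.1f} bits/s")  # on the order of 10 bits/s
```

Under these assumptions thumb typing carries roughly 10 bits per second, which is why a cursor-control BCI at a similar rate is already competitive with touch input for output purposes.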
Just going back to the long-term use: I'm wondering if you have any pathology looking at scar tissue from the many animals that have had long-term implants. And along those lines, it seems like there might be a bit of a gap between use in medical conditions and helping healthy individuals, from a safety perspective.

I didn't quite catch the last question, but I heard the first one — I'll ask you to repeat the second. The first one is: do we have pathology from long-term-use animals? We absolutely do. We don't have pathology from the monkeys that we upgrade and that are still going, but we have other studies that are primarily there to determine safety, and those do have histopathologic endpoints. The scar tissue formation around the threads themselves in the brain is typically negligible — the brain barely reacts to the threads at all — so that's very promising. In terms of the scar tissue over the cortex — this neo-membrane growth that fills in the areas that Elon and Alex mentioned we remove with our current operation — we do evaluate that scar tissue, but it doesn't pose a problem in any way. It's not a continuous foreign-body reaction; it's just filling in tissue that was removed. And could you repeat the second question? I didn't hear it.

Yeah, the second question is really following up on that: it seems like there might be a bit of a gap between use in patients and use in healthy individuals, from a safety perspective. I think people mentioned that they might be interested in trying prototypes — just wondering what your perspective is on lowering the safety risks.

Yeah, it's a great question. Really it's about the long-term use of the device. We have devices that have been implanted, like I said, in monkeys for many years, where we see no behavioral deficit at all. So first there's a question of how you evaluate safety: you have histopathologic
endpoints you can evaluate, but we're also looking for cognitive or behavioral deficits, and we don't see any of those in our animals, which is an important point. In terms of the histopathologic endpoints, they look really, really great. The challenge is explanting the device — which is why we're putting so much effort into the reversibility work and our through-dura insertions. Removing the device is when you could potentially cause damage, so we have a lot of ongoing studies right now to really minimize that risk, though we don't think it's a substantial risk with our current approach. And like I said, Pager was upgraded with the previous surgical approach and is doing great, so clearly it can be perfectly safe. But proving that beyond a shadow of a doubt for humans is something that we're still working to do rigorously
to answer your question.

Yeah — thank you. So, thank you for a very deep dive on many of the different aspects of the device and the system; it's very impressive to see all the engineering work that's gone into it. You just mentioned bit rate — as the prior bit-rate record holder, I can confirm you have indeed shattered my record, so congratulations; I think I saw a peak of 7.4 bits per second, well done. My question is actually around the clinical trials and the FDA, to the extent that you can share. I gather that device removal, or maybe electrode removal, is one of the concerns that the FDA highlighted. Is there anything else you can tell us about what the FDA was concerned about, or had questions about, with respect to the IDE submission?

Yeah — we can probably talk a little bit about that. These are really challenges that we have broadly. Explant safety — proving that rigorously for humans — definitely is one challenge, and something that the FDA commented on. They do ask some really great questions. Other things involve the thermal benchtop testing of our implants: obviously it's important that our implant doesn't damage the tissue by overheating, so having really rigorous and valid benchtop testing for that is very important — it's actually something we've redesigned to be even more accurate now. It's also the case that they ask a lot of very hard questions on biocompatibility and chemical characterization. We've done very rigorous testing for that, but they do ask a lot of questions, getting into the weeds of the data and making sure there really is no chance for any toxic chemicals or bio-incompatible materials to be in the brain. So these are all things we're working to prove, again, above and beyond a shadow of a doubt.

One thing that's maybe worth mentioning here is that it can be
difficult to appreciate the novelty of our product. The surgical robot and the thin-film array in particular are quite new and unlike existing devices, and this means we can't rely heavily on the literature to support the safety and efficacy of the device. So we spend a ton of effort designing and performing testing on our devices so that we can rigorously prove their safety — we can't rely on another product or on some paper — and that's something we're not willing to compromise for our first human participant; we're working very hard on it.

I think if you ask a question like — in my opinion — would I be comfortable implanting this in someone, one of my kids or something like that? If at this point they were in a serious situation — let's say they broke their neck — would I feel comfortable doing it right now? I would say we're at the point where, at least in my opinion, it would not be dangerous.

Hey, thank you for the presentation. I have a non-technical question: are you collaborating with people with motor disabilities, and if so, have they shared any ideas for applications they would be excited about?

I can take the first part of this — I'm not the best person to speak to it, to be honest, but there is a consumer advisory board we have, made up of a number of people who have various conditions, including tetraplegia, and they give us advice on a number of topics. Just as an anecdote: someone came to the office maybe six months ago, and they were telling me what they most wanted to do with their Neuralink device. There were two things they said: one was they wanted to be able to trade stocks day-to-day, to be able to beat their brother; and the second was they wanted to be able to play shooter games. I think what was most striking to me about that encounter was the normalcy of it, and I found that conversation truly inspiring. So — you know who you are, the person who came and
talked with me — have a great day.

Yeah — something we've talked about, but that maybe should be re-emphasized: we are building up a production system for the devices. We're building out the production line, making large numbers of devices — we want to make thousands, ultimately tens of thousands, then millions of devices. So the progress at first, particularly as it applies to humans, will seem perhaps agonizingly slow, but we're doing all of the things necessary to bring it to scale in parallel, so in theory progress should be exponential.

Okay — thank you, that was a very cool presentation. One of the stated goals was recording from everywhere in the brain — being able to record from and perturb any location — but it seems like currently it's all cortical. I'm curious whether, with the current device, there's any long-term goal or idea for extending it deeper into the brain. For neuropsychiatric disorders, for memory — all these things are much deeper, several centimeters. So I'm wondering, if you were to give a very rough estimate, on what timescale can I expect to see a Neuralink product that goes that deep?

Yeah — so the fundamentals of the device in the skull will stay essentially the same, because, as I said earlier, the device in the skull is very much like a smartwatch: it's got a battery, radio, inductive charger, computer — and then you've got the little wires. You'd need to make the wires longer, and you'd have to have a deeper insertion needle for the robot, but this really is intended to be a generalized I/O device. Apart from the tiny wires being longer and the surgical robot needing a longer needle, in theory you should be able to go anywhere.

Because it seems to me that part of the robot is trying to detect where the blood vessels are and then avoid them, correct? Would that be possible at that scale? I mean,
certainly not just visually, but maybe there's some other way of detecting them — is that a current goal, and do you expect it within, say, the next decade?

Definitely, yes. I'm Ian — I run the robotics and surgery engineering team here. Of the three axes that DJ mentioned, one of them is accessing more regions of the brain, and the robot team thinks about this a ton, in terms of what sensors you need to essentially go past the surface. You're right that right now we can really only see down a maximum of about a millimeter. Within the team there are questions about what's best to use next, but ultrasound and photoacoustic tomography are two that come to mind as things that can get centimeters deep. It's a super interesting problem — you need deep imaging, and some ability to steer, to at least avoid large vasculature deep down.

Yeah — or if we can make our needles and threads small enough, in a way that we can still be precise and accurate at depth, then maybe you don't cause a bleed even if you hit a vessel.

Yeah, I think that's really the ideal situation: if the threads are
really tiny, they can actually go through a blood vessel, and it's okay if they're tiny enough — so we wouldn't need the blood vessel imaging in that case. I actually am slightly optimistic that that's achievable, and Matt could probably speak more to this, but DBS currently is kind of just "send it."

Yeah — the current approach involves a wire that you blindly pass in, and it's massive compared to our threads, orders of magnitude bigger. So that's a low bar for us to clear as well. People don't realize, for deep brain stimulation, just how big the hole is.

I mean, in current deep brain stimulation, how much of a borehole is drilled in the brain?

You're drilling a 14-millimeter burr hole and then passing a 2-millimeter wire six to eight centimeters deep into the brain — all blindly, hoping you don't hit a blood vessel, and telling the patient up front: this might be good for you, and there's a one percent chance your brain is going to bleed in a way we can't control. That is current technology, happening right now. So doing better than that — we can definitely do way better than that, no problem. Our needle is 40 microns.

Yeah — thanks again for the phenomenal presentation. I thought it was fascinating how rapidly you could test all of these electrodes, but it begs the question of what your fault tolerance is: if you run these diagnostics and it comes back that you have something that's either shorted or high-impedance, how many of those can you have before you get degraded performance? The second question: when you're actually inserting this device, we saw examples of the electrode going in and then looping back on itself, but it looked like that was assessed by slicing the synthetic material. I'm curious what you're doing to validate the insertion of all these electrodes in vivo — how do we know that that's
not happening in an actual patient?

Yeah, I can answer the second one. Like I mentioned, we weren't actually sectioning in that case — we have a really cool micro-CT, essentially a CT scanner, so we can take an intact proxy, put it in this machine, and image all the way through it. And like I mentioned before, we can make a proxy where that looping back happens every single time, and we can make one where it never happens, and we've pinpointed roughly where actual tissue falls in between. So our current plan for validating and confirming this is to make proxies where it happens really easily — a much worse case than any tissue could possibly be — and then design the system such that it never happens in that scenario. Doing that enough times, with a weak enough proxy, will give us the confidence that this isn't actually happening. And this is the next generation — we don't see this problem at all with the current generation.

I'll take a stab at your first question. To clarify: you're asking what happens if there's a fault on a particular channel?

Yeah, that's correct.

So the nominal scenario is that the impedance will stabilize pretty quickly within the brain, and even at that level we can record great signals — we see lots of spikes and we can use that for BCI. Because we have so many channels — a thousand now, 16,000 later — we can actually run our models with far fewer channels than we have, so it doesn't matter if one channel dies here or there; we can still decode really well. I'm not sure we have official numbers on how many channels we need, but we have an order of magnitude more than that, and the more the better — we can already do a lot with what we have.

All right, maybe just one or two more questions.
I have a question about your very long-term inspiration to have this high-bandwidth communication with advanced AIs. It seems like the advanced AI would need to understand humans' most complex thoughts and emotions, which is what neuroscientists are trying to do. So do you have any ambitions to tackle neuroscience beyond neuro-engineering?

Well, I think we're going to make the input-output device and the software that interfaces with it, and — per the suggestion earlier — we'll probably try to open-source as much as possible so people can take a look at it. I think there will be a lot of others that build upon the work we're doing, the same way that if you make a microprocessor or CPU or computer, people will write lots of software that runs on that computer — but if you don't have the computer, the software is moot. So we're making the input-output device — the computer — and then I think there will probably be a lot of other organizations and companies that build upon that foundation. One of the things I sometimes wonder about is that if you do have a whole-brain interface, you could record memories — we're really getting into Black Mirror stuff here, but that could be one of them.

I also think it's worth mentioning an important point, which is that Neuralink didn't come out of nothing. There are decades and decades of research in the medical and academic fields that really set the foundation for what is possible by putting these electrodes in parts of the brain and being able to read those signals, decode them, and map them to some application. Having been in academia before coming to Neuralink, I do think there are a lot of opportunities for the field to advance at a much more rapid rate by having better tools for observing the dynamics that are happening, and then engaging with it in a
seamless way. I think it was Ian who mentioned that it's almost as if we're building an oscilloscope for the brain, which I think is a beautiful analogy: it gives us more ability to peer into the dynamics, and to use that information to, hopefully, understand what makes us us and how the brain works — the whole shebang.

Hello. The presentation covered keyboard- and handwriting-based input methods. How do you plan to develop an input model that will achieve much higher bandwidths for complex tasks in humans?

Yeah, this is a tough question, and we've started to explore it with monkeys — as you saw, we train many monkeys on very different tasks. It's still an open question that we're after; hopefully once we get to our first participant it will be easier to investigate. One of the options we're exploring, as we showed, is to decode handwriting directly — this is work that started at Stanford and that we're exploring and trying to expand here. In addition to decoding different things from the brain, we also try to provide the user with different interfaces: for example, we showed different types of keyboards, and maybe also swipe and other things that can help increase the communication rate. So we're tackling this in two dimensions.

Yeah, just one other thing to add in that direction: as pointed out by many people here, this is a general I/O system that you can plug and play in different places in the brain, and there are other areas of the brain that can help increase bandwidth — for example, language or speech centers — that could help you much more seamlessly communicate text, if that's the main thing you're trying to do.

Yeah — I think just having this general input-output device will so gigantically improve our understanding of the brain that it's hard to — words can barely
express it. Right now we're mostly just guessing about what's going on in the brain, but if you have direct I/O, it's no longer guessing. What we will learn about the brain with such a device in wide use is many orders of magnitude more than we currently understand. So, on that note — thank you for coming, and thank you for watching online.

[Applause] [Music]