Awesome Tesla bot demo! But what’s next? It’s mind-blowing.
Sep 25
This one-minute September 24, 2023 video demo is incredible.
End-to-end (from camera to actuators) task-agnostic neural network control performing highly dexterous sorting.
No hard-coding.
Upon a (presumably text or voice) prompt to ‘sort’ or ‘un-sort’.
Here’s the link.
But next? Imagine…
I suggested a year ago in these pages that within a year or so (of then) we might see an Optimus sitting in front of a half-dozen components it has never seen before, just ‘guessing’ how to put them together while grabbing appropriate nearby tools.
No special arrangements of parts or tools.
As a neural network AI engineer, I expect exactly that next, within 3–6 months. And it will be a sight to see.
End-to-end neural nets
As with FSD version 12, it means you just train via simulation or video of people doing the same thing. Actually: just similar things.
No hard-coded ‘if then’ rules.
At run-time the bot makes a guess of what to do next and how to do it, based on the most similar past training at multiple spatial and time scales. Hundreds of times a second, driven by visual and proprioceptive (limb-position) feedback.
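To make that guess-act-observe cycle concrete, here is a minimal, entirely hypothetical sketch of such a closed loop. Every name in it (`policy`, `control_loop`, the placeholder camera frame) is my invention for illustration, and the random linear “network” is just a stand-in for a trained model, not anything Tesla has published:

```python
import numpy as np

def policy(camera_frame, joint_angles):
    """Stand-in for a trained neural net: maps raw observations
    straight to motor commands, with no hand-coded 'if then' rules."""
    # Flatten pixels and proprioception (limb positions) into one vector.
    obs = np.concatenate([camera_frame.ravel(), joint_angles])
    # A fixed random linear map, purely to make the sketch runnable.
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((len(joint_angles), obs.size)) * 1e-3
    return weights @ obs  # one command per motor axis

def control_loop(steps=100, n_joints=20):
    """Run the guess-act-observe cycle; a real bot repeats this
    hundreds of times per second."""
    joint_angles = np.zeros(n_joints)
    for _ in range(steps):
        frame = np.ones((8, 8))  # placeholder for a camera image
        command = policy(frame, joint_angles)
        joint_angles = joint_angles + 0.01 * command  # actuators move
    return joint_angles
```

The point of the sketch is the shape of the loop, not the math: observation in, command out, new observation in, at a fixed high rate.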
But these guesses are like ChatGPT’s: highly intelligent.
And the guesses are occurring at multiple hierarchies simultaneously.
The neural net is deciding which two components to put together first. Which one to hold still. Which combination of motor axes to rotate or slide.
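One way to picture those simultaneous levels of guessing is a two-level hierarchy of policies. This is purely illustrative: in a true end-to-end net the levels are implicit inside one model, and every name below (`high_level_policy`, `low_level_policy`, the component list) is a made-up example:

```python
from dataclasses import dataclass
import random

@dataclass
class HighLevelPlan:
    pick: str  # which component to move
    hold: str  # which component to keep still

def high_level_policy(components):
    """'Guess' which component to pick up and which to hold still."""
    pick, hold = random.sample(components, 2)
    return HighLevelPlan(pick=pick, hold=hold)

def low_level_policy(plan, n_axes=6):
    """'Guess' a combination of motor axes to rotate or slide,
    conditioned on the high-level plan."""
    # Pretend commands: one rotation/slide value per motor axis.
    return [random.uniform(-1, 1) for _ in range(n_axes)]

random.seed(0)
plan = high_level_policy(["bolt", "bracket", "nut", "washer"])
commands = low_level_policy(plan)
```

Splitting the levels out like this is a teaching device; the remarkable thing about the real demo is that one network appears to handle both at once.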
It’s incredible.
But it’s what we humans do thousands of times a day.
But I love seeing technology do it, after imagining it for the last 40 years.
If un-orchestrated assembly of simple components isn’t on the Optimus team’s near-term roadmap?
I’ll be shocked.
Because that will be the Optimus ‘ChatGPT’ moment, even in video demo.
BTW, there are already dads showing they’ve DIY-built their own ‘androids’ that can do the same thing. Not for sale.