tgoop.com/abekek_notes/936
What a great interview. I haven't finished watching it yet, but from the beginning one thought struck me: right now is probably the worst time to do a PhD or research in AI. That is probably an unpopular opinion.
Why? When I was applying for PhD programs last semester (I ended up not applying anywhere), I saw a bunch of labs doing research on LLMs. I agree with Yann LeCun that LLMs have a fundamental problem: they lack capabilities essential for intelligent beings, such as understanding and reasoning about the physical world.
Yes, they can be great at replacing people in low-stakes situations like customer support or generating emails; that's where a bunch of startup ideas come from. But they rely solely on language as a medium for reasoning.
Think about your own reasoning. Does everything you think about involve language? When you fill your water bottle and try not to let it overflow, are you producing any language to guide the process? There is far more vision and understanding of physics involved than language. In fact, you could do it even before you learned to speak and understand language.
And LLMs have more problems like this.
So, back to my take on doing a PhD. If you do a PhD in AI, I think it should be in something like the fundamentals of AI, physics-informed neural networks, or AI for science. That's where the LLM hype is (mostly) avoided, and that's where the next step toward AGI lies.
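To make "physics-informed neural networks" concrete: the idea is to train a network whose loss penalizes violations of a governing equation, not just data mismatch. Here is a minimal toy sketch of my own (not from the post): a tiny NumPy net learns u(x) satisfying du/dx = -u with u(0) = 1, with the boundary condition built into the ansatz u(x) = 1 + x·NN(x). All names and hyperparameters below are illustrative assumptions.

```python
# Toy physics-informed neural network (PINN) sketch in pure NumPy.
# Target ODE: du/dx = -u on [0, 1], u(0) = 1 (exact solution: exp(-x)).
# Setup and constants are illustrative, not a reference implementation.
import numpy as np

H = 8                                         # hidden units
rng = np.random.default_rng(0)
params = rng.normal(scale=0.5, size=3 * H)    # flattened [w, b, v]

def net(params, x):
    """One-hidden-layer tanh net NN(x) and its derivative dNN/dx."""
    w, b, v = params[:H], params[H:2 * H], params[2 * H:]
    z = np.outer(x, w) + b                    # (N, H) pre-activations
    t = np.tanh(z)
    n = t @ v                                 # NN(x)
    dn = (1 - t ** 2) @ (v * w)               # dNN/dx (chain rule)
    return n, dn

def physics_loss(params, x):
    """Mean squared residual of du/dx + u = 0.
    Ansatz u(x) = 1 + x * NN(x) enforces u(0) = 1 exactly."""
    n, dn = net(params, x)
    u = 1 + x * n
    du = n + x * dn                           # product rule
    return np.mean((du + u) ** 2)

def num_grad(f, p, eps=1e-6):
    """Central finite-difference gradient (fine for this toy-sized net)."""
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

x = np.linspace(0.0, 1.0, 32)                 # collocation points
loss_before = physics_loss(params, x)
for _ in range(500):                          # plain gradient descent
    params -= 0.05 * num_grad(lambda p: physics_loss(p, x), params)
loss_after = physics_loss(params, x)
```

No labeled data is used at all: the "supervision" is the differential equation itself, evaluated at collocation points, which is exactly what distinguishes this line of work from language-model training.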
https://youtu.be/5t1vTLU7s40?si=jYx-S0J1WWjGlluA
BY Abekek Notes