Enjoyed reading this, very relatable. For me, it felt like the wrong time to write about AI while we were in the throes of the hype cycle. First, I'm not that impressed by the tech. Second, it seemed like a waste of time to think and write about something I'm not on board with, in a mode of negation; I wasn't sure writing about AI was the best use of my time. But I've come to realize that I do have some things I want to share about my experiences and the knowledge I've acquired about AI. To use a beach metaphor: some waves you can duck under and let pass over you, some you can turn sideways to so they barely touch you, but some are upon you before you know it and you jump to meet the crest. That is this moment. I believe people are now ready to hear what I and others have to say.
I think this is very true. It feels like we're reaching the end of this particular hype cycle, and it's time for the conversation to start shifting. That's part of why I feel comfortable writing now, too. I didn't think I would be heard over all the shouting. Now, it feels like the world is ready to think more deeply about AI and listen to more voices.
I hope you decide to write something! I would be very interested in what you have to say.
Welcome to the realities of metaphysics. Upon taking a deep breath and pondering modern realities, a lot of what we think of, internalized or not, becomes a reality, whether virtual, abstract, or eventually concrete. Thoughts of a scientist.
This could be a topic without end. Who gets to decide who gets to be an AI expert? Who gets to decide who gets to decide who gets to be an AI expert? Doesn't it take one to know one? Does it matter? What does it mean to be an expert in a field that renews itself every 18 months?
Now, the other side of the coin: everyone is being told to jump on the bandwagon or be left behind. VC funding has dried up for anything but AI, maybe even for anything but Generative AI.
Big tech has all but stopped hiring because "let's see what these agents can do."
I think people who got a specialized education in AI between 2015 and 2020 now feel as if their opportunity is being stolen by opportunists and snake-oil salespeople. In every industrial (r)evolution, such people do appear, and they help sell the new wares, but there is also a lot of room for anyone to adopt a beginner's mind and learn.
Sooner or later, people who learn and relentlessly do the work do not need to claim anything; their actions get them recognition, whether they want it or not.
I got invited to participate in several podcasts to speak on the topic. I declined.
The irony is that I studied NLP and expert systems in college more than 25 years ago (WOW!). It was not my foresight, it was the foresight of my teachers.
I have learned a lot about AI over the past 10 years or so, and about Generative AI over the past 2 years, but I would never call myself an expert. I just enjoy knowing there is much more that I can learn about it, and that's where my satisfaction comes from.
Loved this post. Reminds me of thoughts I've had increasingly regarding LLM usage online: as LLMs get better and better at rewording our thoughts, they're also increasingly useful at reorienting how we're thinking. Good for divergent thought, bad for initial genuine expression online.
It feels like an artificial bar is being raised for written content online (not just grammar and style, but how thoughts are being expressed). This makes some online interactions feel even more superficial than they used to. Again, good for some transactional things online, bad for genuine connection.
Responding to emails you suspect were written or influenced by LLMs seems to encourage responding using an LLM. Fine for some interactions, but tedious and dull in practice.
As far as who gets to be an expert -- if we continue with this rate of progress, being an expert will equate to just telling your agent/bot how you want to be perceived online, and automating every other semblance of your existence on the web.
What remains is an insatiable desire to be human, and the journey we're collectively taking by building and utilizing this tech. Everyone has a right to an opinion on AI given the scope and implications for humanity.
Excellent. Thank you for this.
The answer can be traced back to the Bitcoin whitepaper: proof of work.
Well-written article, Anna.
"AI didn’t need to go rogue to kill us. It only needed to accelerate global warming."
You seem to be suggesting that this isn't an issue anybody needs to worry about.
How can we confirm?
"One wrote a paper about making neural networks more efficient, and the other designed faster, energy-saving information systems."
Why didn't you identify the professors who solved these problems?
I think people would love to know the people who've staved off what was previously considered imminent catastrophe. I know I would.
Good take.