Last week, Meta made a game-changing move in the world of AI.
At a time when other leading AI companies like Google and OpenAI are closely guarding their secret sauce, Meta decided to give away, for free, the code that powers its new large language model, Llama 2. That means other companies can now use Meta's Llama 2 model, which some technologists say is comparable to ChatGPT in its capabilities, to build their own customized chatbots.
Llama 2 could challenge the dominance of ChatGPT, which broke records as one of the fastest-growing apps of all time. But more importantly, its open-source nature adds new urgency to an important ethical debate over who should control AI, and whether it can be made safe.
As AI becomes more advanced and potentially more dangerous, is it better for society if the code is under wraps — limited to the staff of a small number of companies — or should it be shared with the public so that a wider group of people can have a hand in shaping the transformative technology?
From Vox