Nvidia's Bill Dally said during the keynote address at the IEEE/ACM International Conference on Computer-Aided Design that the company is testing whether it can increase the productivity of its chip designers using generative artificial intelligence (AI).
Nvidia's ChipNeMo system began as a large language model (LLM) trained on 1 trillion tokens (fundamental language units) of data.
The next phase of training used 24 billion tokens of specialized data: 12 billion tokens of design documents, bug reports, and other English-language internal data, and 12 billion tokens of code.
ChipNeMo was then trained on 130,000 sample conversations and designs.
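To put the reported figures in perspective, the domain-specific corpus is tiny next to the general pretraining data. A minimal sketch, using only the token counts stated above (the stage names and function are illustrative, not Nvidia's terminology):

```python
# Token budget reported for ChipNeMo's training stages.
# Figures are from the article; the breakdown below is arithmetic only.

PRETRAIN_TOKENS = 1_000_000_000_000   # 1 trillion general-language tokens
DOMAIN_TOKENS = 24_000_000_000        # 24 billion specialized tokens
DOMAIN_DOC_TOKENS = 12_000_000_000    # design docs, bug reports, internal English
DOMAIN_CODE_TOKENS = 12_000_000_000   # internal code
SFT_EXAMPLES = 130_000                # sample conversations and designs

def domain_share() -> float:
    """Fraction of all pretraining tokens that is domain-specific."""
    return DOMAIN_TOKENS / (PRETRAIN_TOKENS + DOMAIN_TOKENS)

print(f"domain data share: {domain_share():.2%}")  # roughly 2.34%
```

The point of the calculation: the specialized chip-design data amounts to only a few percent of what the model saw, which is why this style of adaptation is typically framed as continued pretraining on top of an existing LLM rather than training from scratch.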
The resulting model was assigned to act as a chatbot, an electronic design automation-tool script writer, and a bug report summarizer.
From IEEE Spectrum
Abstracts Copyright © 2023 SmithBucklin, Washington, D.C., USA