
source link: https://slashdot.org/story/23/06/13/158204/amd-likely-to-offer-details-on-ai-chip-in-challenge-to-nvidia

AMD Likely To Offer Details on AI Chip in Challenge To Nvidia


AMD Likely To Offer Details on AI Chip in Challenge To Nvidia (reuters.com) 14

Posted by msmash on Tuesday June 13, 2023 @11:20AM from the shape-of-things-to-come dept.
Advanced Micro Devices on Tuesday is expected to reveal new details about an AI "superchip" that analysts believe will be a strong challenger to Nvidia, whose chips dominate the fast-growing artificial intelligence market. From a report: AMD Chief Executive Lisa Su will give a keynote address at an event in San Francisco on the company's strategy in the data center and AI markets. Analysts expect fresh details about a chip called the MI300, AMD's most advanced graphics processing unit, the category of chips that companies like OpenAI use to develop products such as ChatGPT. Nvidia dominates the AI computing market with 80% to 95% of market share, according to analysts.

Last month, Nvidia's market capitalization briefly touched $1 trillion after the company said it expected a jump in revenue after it secured new chip supplies to meet surging demand. Nvidia has few competitors working at a large scale. While Intel and several startups such as Cerebras Systems and SambaNova Systems have competing products, Nvidia's biggest sales threat so far is the internal chip efforts at Alphabet's Google and Amazon's cloud unit, both of which rent their custom chips to outside developers.
  • IMHO NVidia's competitive advantage is in their software, not really their hardware. They don't own enough of the manufacturing process for their hardware to be that different from their competitors. While they have a slight edge in hardware now, there is little guarantee that will continue. Their edge appears to be in the software ecosystem built around their hardware. This is where I think AMD will need to improve if they really want to take NVidia on.

    And I think AMD and other competitors will be able to get past NVidia's software moat in time, which is why I am surprised NVidia's stock is trading at such a high level.

    • Re:

      Watch out, you're going to be branded as an evil short seller by the slashcommie wannabes.

      • Re:

        Ha, I'm a firm believer that the market can remain irrational far longer than I can remain solvent. I think Tesla should be trading at 10% of its current valuation as well (which would still be almost triple Ford's revenue / market cap ratio), but I'm not looking to short them any time soon either. I'll stick with my low-fee index funds and wait until retirement.

    • Re:

      Exactly. The only reason nVidia has an advantage is the CUDA library. AMD needs to get ROCm running properly in Windows, and not just Linux.
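      A quick way to see which backend a given PyTorch build actually exposes is to check its version metadata. A minimal sketch, assuming a PyTorch install (ROCm builds of PyTorch reuse the torch.cuda API surface and report a HIP version instead of a CUDA version; none of this is from the comment itself):

          import torch

          # ROCm builds of PyTorch set torch.version.hip and leave
          # torch.version.cuda as None; CUDA builds do the reverse.
          if torch.version.hip is not None:
              backend = "ROCm/HIP " + torch.version.hip
          elif torch.version.cuda is not None:
              backend = "CUDA " + torch.version.cuda
          else:
              backend = "CPU-only build"

          print("backend:", backend, "| GPU visible:", torch.cuda.is_available())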

      • Re:

        If only it even ran properly on Linux (hint: I'm on Linux, have an AMD GPU, and no luck).
      • Re:

        I'm on Linux, I'm not a pro, but I might occasionally have to run CUDA code, so I went with NVidia (and I regret it every day with my half-working suspend). AMD advertises ROCm as easy to convert to from CUDA, and they provide nearly-automated tools. But if it's really so easy that (according to them) it takes just a couple of hours to port a project, they would only need a very small team to submit patches and port hundreds of OSS projects at github/gitlab/sourceforge/... Many OSS communities are happy with extending support to new hardware.
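        The "nearly-automated tools" referred to here are ROCm's HIPIFY utilities. A rough sketch of driving hipify-perl from a script, assuming ROCm's HIPIFY is installed and using a hypothetical saxpy.cu as input (both assumptions, not details from the comment):

            import pathlib
            import subprocess

            # hipify-perl translates CUDA source to HIP and writes the result
            # to stdout by default; "saxpy.cu" is just a placeholder file name.
            cuda_src = pathlib.Path("saxpy.cu")

            result = subprocess.run(
                ["hipify-perl", str(cuda_src)],
                capture_output=True, text=True, check=True,
            )

            # The translated source still needs a review pass and a build
            # against the HIP runtime before it is actually usable.
            pathlib.Path("saxpy.hip.cpp").write_text(result.stdout)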

      • Re:

        AMD needs to get their cards relatively seamlessly supported in the major deep learning packages.

        nVidia had CUDA working pretty well when those packages were being written, so that's what got supported. Now AMD is playing catchup, so they're going to have to do a lot of that work themselves, or make someone else really, really interested in doing it for them.

        Docker containers and support for specific versions of specific operating systems aren't going to cut it.
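        As a concrete target for "seamless", the same user code should pick up whatever accelerator is present with no vendor-specific branch. A minimal sketch, assuming a PyTorch install (ROCm builds answer torch.cuda.is_available() just like CUDA builds):

            import torch
            import torch.nn as nn

            # Both CUDA and ROCm builds of PyTorch report the GPU through
            # torch.cuda.is_available(), so no vendor-specific branch is needed.
            device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

            model = nn.Linear(512, 10).to(device)
            x = torch.randn(32, 512, device=device)

            print(model(x).shape, "on", device)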

    • Re:

      Indeed. In a sense, it is already happening with some very impressive optimizations people have come up with. It is even quite possible that specialized hardware will turn out to be essentially irrelevant, as chat AI seems to be strongly subject to diminishing returns for larger models.

      The NVidia stock price is just a mix of hype and "greater fool theory".

    • Re:

      The insight that much of Nvidia's current advantage lies in its software is likely correct. However, the million dollar question is why Nvidia's software moat has held up this long. It's not like AMD and other companies realized in the last few months that the AI picks and shovels were a big business. They've known this for many years and have tried to make up the difference. So, why hasn't that difference closed by now? And why would the difference close in the near future? In the last few years, it

  • There's marketing in there somewhere...

  • By the time specialized hardware becomes available in sufficient quantity, approaches have usually been optimized and changed enough to make that hardware obsolete. That is a claim my CS 101 prof made 35 years back, and I have seen it pan out time and again.

    I mean, people are already training the new models on normal PCs and it does not take forever. They are running the models on phones with some small restrictions and still reasonable responsiveness. This whole "AI hardware" push is yet more hype from those who do not understand the tech.
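    As a rough illustration of the "normal PCs" point, small language models do run at interactive speed on an ordinary CPU. A minimal sketch, assuming the Hugging Face transformers package and the small distilgpt2 checkpoint (both assumptions, not anything from the comment):

        from transformers import pipeline  # assumes `pip install transformers torch`

        # device=-1 forces CPU; distilgpt2 is a small (~82M parameter) model
        # that generates text at interactive speed on a normal desktop.
        generator = pipeline("text-generation", model="distilgpt2", device=-1)

        result = generator("AMD's new AI chip is", max_new_tokens=30)
        print(result[0]["generated_text"])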

    • Re:

      Yup, remember when you used to pay extra for an FPU? Lol. Nobody has those anymore!

      Also, vector processing units, dedicated graphics processing units, hardware interfaces....

  • nVidia even has some scare wording for their consumer-grade GPUs, claiming they "pose a fire risk" as compared to their datacenter GPUs.

    If AMD wants to kick nVidia's ass, they need to do three things:
    * Make a GPU roughly as good as a 4090 (it doesn't have to be better).
    * Make a version of tensorflow that works with it on Linux, Windows, and Mac (a quick sanity check for this is sketched below).
    * Give it gobs of memory. As in 24G or more.

    This last is super important, as the key feature of the nVidia high-end cards is not their performance but their memory.
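    A minimal sanity check for the tensorflow point above, assuming a TensorFlow build with the vendor's GPU support wired in (tensorflow-rocm on the AMD side is an assumption, not something from the comment):

        import tensorflow as tf

        # On a working GPU build (CUDA or ROCm), at least one GPU should be listed.
        gpus = tf.config.list_physical_devices("GPU")
        print("visible GPUs:", gpus)

        # Run a small op on the GPU if one is present, otherwise fall back to CPU.
        device = "/GPU:0" if gpus else "/CPU:0"
        with tf.device(device):
            x = tf.random.normal((1024, 1024))
            y = tf.matmul(x, x)
        print("matmul ran on:", y.device)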
