  • Game industry professional here: we know Riccitiello. He presided over EA at critical transition periods and failed the company. Under his tenure, Steam won total supremacy while he was busy trying to shift players to pay-per-install, slide-your-credit-card-to-reload-your-gun schemes. Yes, his predecessor jumped the shark by publishing the Orange Box, but Riccitiello's greed sealed the largest publisher's total failure to deal with digital distribution: he ignored that gamers loved collecting boxes, something Valve understood and eventually turned into its massive Sale business, where people buy many more games than they ever play.

    He had presided over EA earlier than that too, and failed then as well.

    Both times, he ended up getting sacked after the stock hit a record low. Personally, though, he made out like a bandit by selling EA his own stake in VG Holdings (BioWare/Pandemic) after becoming EA's CEO.

    He’s the kind of CEO a board of directors would appoint to loot a company.

    At Unity, he invested heavily in ads and gambled on the company becoming another platform landlord. He also probably paid good money for reputation management (search for Riccitiello, or even his full name, on Google and marvel at the results) after certain accusations were made.






  • I think at this point we are arguing about belief.

    I actually work with this stuff daily, and there are a number of 30B models that exceed ChatGPT on specific tasks such as coding or content generation, especially when enhanced with a LoRA.
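
    For a concrete picture, here is a minimal sketch of attaching a LoRA adapter to a local base model with Hugging Face transformers + peft. The model and adapter IDs are hypothetical placeholders, not specific recommendations:

    ```python
    # Minimal sketch: load an fp16 base model and attach a LoRA adapter on top.
    # Both IDs below are hypothetical placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "some-org/some-30b-base"   # hypothetical 30B base model
    lora_id = "some-org/some-task-lora"  # hypothetical task-specific adapter

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(
        base_id,
        torch_dtype=torch.float16,  # fp16 so a 30B model fits across consumer GPUs
        device_map="auto",          # let accelerate place layers on available devices
    )
    model = PeftModel.from_pretrained(model, lora_id)  # apply the adapter without merging

    prompt = "Write the opening paragraph of a short story:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```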

    airoboros-33b-gpt4-1.4-SuperHOT-8k, for example, comfortably outputs more than 10 tokens/s on a 3090 and beats GPT-3.5 at writing stories, probably because it's uncensored. It also has an 8k context instead of 4k.
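
    To reproduce that kind of throughput measurement locally, here is a sketch using llama-cpp-python; the file name is a placeholder, and the RoPE scaling factor is an assumption (SuperHOT fine-tunes typically need linear RoPE scaling to use the extended 8k context):

    ```python
    # Minimal sketch: run a local quantized model and measure tokens/s.
    import time
    from llama_cpp import Llama

    llm = Llama(
        model_path="./airoboros-33b-gpt4-1.4-superhot-8k.q4_K_M.bin",  # hypothetical local file
        n_ctx=8192,           # extended 8k context window
        n_gpu_layers=-1,      # offload every layer to the GPU (e.g. a 3090)
        rope_freq_scale=0.5,  # assumed linear RoPE scaling for the SuperHOT 8k fine-tune
    )

    t0 = time.time()
    out = llm("Once upon a time,", max_tokens=256)
    elapsed = time.time() - t0
    n_tokens = out["usage"]["completion_tokens"]
    print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tokens/s")
    ```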

    Several recent Llama 2-based models exceed ChatGPT on coding and classification tasks and are approaching GPT-4 territory. Google Bard has already been beaten to a pulp.

    The speed of advances is stunning.

    Apple Silicon (M-series) Macs can run large LLMs via llama.cpp because of their unified memory architecture; in fact, a recent MacBook with 64GB of unified memory can comfortably run most models just fine. Even laptop AMD GPUs with shared memory have started running generative AI in the last week.
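
    A minimal sketch of the Mac side, assuming llama-cpp-python was built with Metal support (e.g. CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python) and a hypothetical model path:

    ```python
    # Minimal sketch: on Apple Silicon the weights live in the shared unified
    # memory pool, so there is no separate VRAM budget to manage.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./some-65b.q4_K_M.bin",  # hypothetical path; a quantized 65B fits in 64GB
        n_gpu_layers=-1,                     # Metal offload straight out of unified memory
    )

    out = llm("Explain unified memory in one sentence:", max_tokens=64)
    print(out["choices"][0]["text"])
    ```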

    You can follow along at chat.lmsys.org. Open-source LLMs are only a few months old, but they have already started encroaching on the proprietary leaders, who have years of a head start.