
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Builder.ai, once valued at $1.5 billion and backed by Microsoft, collapsed after revelations that the company had exaggerated its AI capabilities and financial performance, relying on human labor rather than AI. The deception led to bankruptcy, the CEO's resignation, and significant financial losses for investors, highlighting the risks of 'fake AI' in the industry.[AI generated]
Why is our monitor labelling this as an incident or hazard?
The event involves the misuse and fraudulent representation of AI technology by Builder.ai and other companies, which directly caused financial harm to investors and breached legal obligations. Because the development and use of the AI system were misrepresented, and the resulting harms were realized (financial loss, fraud, violation of rights), the event constitutes an AI Incident. The presence of AI systems is reasonably inferred from the company's claims and marketing, and the harm is directly linked to the misuse of those claims. It therefore qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]