{"id":"d9edbdbc-8084-423b-b03a-4242bbbfff22","title":"2025 LLM and Generative AI Timeline","ack":"2026-01-06T01:45:03Z","created_at":"2025-12-31T12:24:25Z","saved":true,"shared":true,"branch_id":"00000000-0000-4000-0000-000000000000","profile":{"id":null,"name":null,"accessible":true,"subtitle":"Experimental research engine, expensive","model":"ki_deep_research","deprecate":false,"retired":false,"successor_model":null,"model_name":"Research (Experimental)","model_provider":"kagi","model_input_limit":1000000,"recommended":false,"model_info":"Kagi Research (Experimental). Cost: Expensive. Quality: 5/5. Speed: 2/5. Privacy: 4/5. Last updated: 2025-12-09.","model_provider_name":"Kagi","internet_access":true,"personalizations":true,"shortcut":null,"is_default_profile":false},"tag_ids":[],"read_only":true}
[{"id":"460dcf7e-8f75-4d01-9b4f-1b11badd0fb2","thread_id":"d9edbdbc-8084-423b-b03a-4242bbbfff22","created_at":"2025-12-31T12:24:25Z","state":"done","prompt":"Generate a comprehensive, chronological list of every notable LLM, Generative AI model, and agentic framework release that occurred between January 1, 2025, and December 31, 2025.\n\n## Formatting Constraints (Strict):\n\n- Breadth over Depth: Capture as many releases as possible (OpenAI, Google, Anthropic, Meta, Mistral, xAI, DeepSeek, Alibaba, etc.).\n\n- Sentence-Length Descriptions: Each entry must be exactly one sentence describing the core innovation or release (e.g., \"Meta released Llama 4, featuring a 100k native context window and improved reasoning benchmarks.\").\n\n- Chronological Order: Organize by Month (January through December).\n\n- No Paragraphs: Use a clean bulleted list format.\n\nScope: Include flagship LLMs, specialized coding models (like Claude Code), video generators (Sora 2, Veo 3), image models (Nano Banana), and agentic \"computer use\" updates.\n\nBegin with: \"# The 2025 Generative AI Release Timeline\"","references_html":"","references_md":"[^1]: [2025 in artificial intelligence - Wikipedia](https://en.wikipedia.org/wiki/2025_in_artificial_intelligence) (18%)\n[^2]: [2018–2025 Timeline Revealed!
| by Generative AI | Medium](https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc) (6%)\n[^3]: [History of artificial intelligence - Wikipedia](https://en.wikipedia.org/wiki/History_of_artificial_intelligence) (6%)\n[^4]: [The Latest Generative AI Models in 2025: A Comprehensive Guide](https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3) (5%)\n[^5]: [Inside language models (from GPT to Nova) – Dr Alan D. Thompson...](https://lifearchitect.ai/models/) (5%)\n[^6]: [Generative AI Timeline | The Blueprint](https://timeline.the-blueprint.ai/) (4%)\n[^7]: [Timeline of AI and language models – Dr Alan D. Thompson](https://lifearchitect.ai/timeline/) (3%)\n[^8]: [French AI lab Mistral releases new AI models as it looks to keep pace ...](https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html) (3%)\n[^9]: [Alan’s conservative countdown to AGI – Dr Alan D. 
Thompson...](https://lifearchitect.ai/agi/) (3%)\n[^10]: [AI Leaderboards 2025 - Compare LLM, TTS, STT, Video ...](https://llm-stats.com/) (3%)\n[^11]: [List of large language models](https://en.wikipedia.org/wiki/List_of_large_language_models) (3%)\n[^12]: [New Models From OpenAI, Anthropic, Google – All At The Same Time - Forbes](https://www.forbes.com/sites/johnwerner/2025/08/06/new-models-from-openai-anthropic-google--all-at-the-same-time/) (3%)\n[^13]: [Meta Shares Hard Numbers For Llama’s Enterprise AI Value](https://www.forbes.com/sites/patrickmoorhead/2024/08/29/meta-shares-hard-numbers-for-llamas-enterprise-ai-value/) (3%)\n[^14]: [AI Model Release Timeline - AI Flash Report](https://aiflashreport.com/model-releases.html) (2%)\n[^15]: [AI Timeline | Innovations and Advancements](https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations) (2%)\n[^16]: [Top 9 Large Language Models as of December 2025](https://www.shakudo.io/blog/top-9-large-language-models) (2%)\n[^17]: [LLM Updates 2025 | AI Updates & Model Version Changelog](https://llm-stats.com/llm-updates) (2%)\n[^18]: [Generative artificial intelligence](https://en.wikipedia.org/wiki/Generative_artificial_intelligence) (2%)\n[^19]: [The Top 5 AI Models of 2025: What’s New and How to Use... | Medium](https://medium.com/h7w/the-top-5-ai-models-of-2025-whats-new-and-how-to-use-them-6e31270804d7) (2%)\n[^20]: [China's Alibaba Rolls Out Latest Flagship Qwen AI Model... 
- Bloomberg](https://www.bloomberg.com/news/articles/2025-04-28/alibaba-rolls-out-latest-flagship-ai-model-in-post-deepseek-race) (2%)\n[^21]: [Mistral awed by DeepSeek’s AI model - tribune.net.ph](https://tribune.net.ph/2025/01/31/mistral-awed-by-deepseeks-ai-model) (2%)\n[^22]: [A Brief History of Generative AI](https://www.dataversity.net/articles/a-brief-history-of-generative-ai/) (2%)\n[^23]: [LLM Frameworks You Can’t Ignore in 2025 (Apple, Meta, Google...)](https://medium.com/@hadiyolworld007/llm-frameworks-you-cant-ignore-in-2025-apple-meta-google-more-939ae36010dc) (2%)\n[^24]: [Alibaba releases AI model it says surpasses DeepSeek](https://www.reuters.com/technology/artificial-intelligence/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3-2025-01-29/) (2%)\n[^25]: [2025 will be a huge year for the field of artificial intelligence.](https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/) (2%)\n[^26]: [2025 Artificial Intelligence (AI) Events Timeline](https://www.aibase.com/ai-timeline/2025) (2%)\n[^27]: [The Top AI Models And Trends Shaping SaaS in 2025](https://www.cloudzero.com/blog/top-ai-models/) (1%)\n[^28]: [France’s Mistral Releases AI Model Tailored for... 
- Bloomberg](https://www.bloomberg.com/news/articles/2025-05-07/france-s-mistral-releases-ai-model-tailored-for-businesses) (1%)\n[^29]: [Evolving LLM Market: Anthropic Leads 2025 Enterprise ...](https://www.aicerts.ai/news/evolving-llm-market-anthropic-leads-2025-enterprise-share/) (1%)\n[^30]: [Generative AI in 2025: A Timeline - LinkedIn](https://www.linkedin.com/pulse/generative-ai-2025-timeline-mar%C3%ADa-jos%C3%A9-pel%C3%A1ez-montalvo-0isbe) (1%)\n[^31]: [Timeline of artificial intelligence - Wikipedia](https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence) (1%)\n[^32]: [The October 2025 LLM Revolution: What You Need to Know | Medium](https://medium.com/@Micheal-Lanham/the-october-2025-llm-revolution-what-you-need-to-know-7828c222c9d2) (1%)\n[^33]: [Alibaba releases AI model it claims surpasses DeepSeek-V3](https://www.straitstimes.com/asia/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3) (1%)\n[^34]: [OpenAI vs. Anthropic Claude, Google Gemini, and Mistral](https://aitechcentre.com/openai-vs-anthropic-claude-google-gemini-and-mistral/) (1%)\n[^35]: [AI Model Releases in 2025: The Roundup of AI Launches](https://www.timesofai.com/news/roundup-of-ai-model-releases-in-2025/) (1%)\n","reply":"<details><summary>Planned research</summary>\n<ul>\n<li>Search for comprehensive AI model release timelines for 2025</li>\n<li>Search by major company names (OpenAI, Google, Anthropic, Meta, Mistral, xAI, DeepSeek, Alibaba) + 2025 releases</li>\n<li>Search for specific model categories: video AI (Sora, Veo), image AI, coding AI, agentic AI</li>\n<li>Search by quarters/months to ensure chronological coverage</li>\n<li>Cross-reference and compile into chronological list</li>\n<li>Verify key dates and descriptions through multiple sources</li>\n</ul>\n</details>\n<details><summary>Searched with Kagi<p></p>\n<ul>\n<li>AI model releases 2025 timeline comprehensive list</li>\n<li>LLM releases 2025 OpenAI Google Anthropic Meta</li>\n<li>generative AI launches 
2025 chronological</li>\n<li>DeepSeek Alibaba Mistral xAI 2025 model releases</li></ul></summary><br>\nSources (40):<br>\n<strong>Timeline of AI and language models – Dr Alan D. Thompson</strong><br>\n<a href=\"https://lifearchitect.ai/timeline/\" target=\"_blank\">https://lifearchitect.ai/timeline/</a><br>\nJul 20, 2021 Time between releases of OpenAI’s GPT models. Model. Months since last release.Living documents Alan’s conservative countdown to AGI Alan’s ASI checklist Alan’s technical highlights (new tab) ALPrompt Calculations and conversions Compute Table (new tab) Datasets Table LLM-Reset/LLM-Normalize Models Table Timeline of AI and language models Visualizations The Who Moved My Cheese?\n\n<p><strong>Inside language models (from GPT to Nova) – Dr Alan D. Thompson...</strong><br>\n<a href=\"https://lifearchitect.ai/models/\" target=\"_blank\">https://lifearchitect.ai/models/</a><br>\nJul 13, 2021 Language model sizes Summary of current models Count of LLMs released per month (2024) Compute Context windows Achievements unlocked: Emergent abilities of LLMs Large language models: API or on-premise Increasing dataset sizes 2018-2025 GPT-3’s top 10 datasets by domain/source Contents of GPT-3 & the Pile v1 Contents of Chinese models Language model sizes & predictions [...]What’s in my AI? A Comprehensive Analysis of Datasets Used to Train GPT-1, GPT-2, GPT-3, GPT-NeoX-20B, Megatron-11B, MT-NLG, and Gopher.</p>\n<p><strong>Alan’s conservative countdown to AGI – Dr Alan D. Thompson...</strong><br>\n<a href=\"https://lifearchitect.ai/agi/\" target=\"_blank\">https://lifearchitect.ai/agi/</a><br>\nJun 23, 2022 Job listing, archive. Apr/2025. INFO: OpenAI Memory: ‘ChatGPT will reference your past conversations to recall useful information you’ve shared. 
It uses this to learn about your interests and preferences, helping make future chats more personalized and relevant.’ and ‘You can also teach ChatGPT something new by saying it in a chat‘, however, ‘it doesn’t retain every detail.’Timeline of AI and language models Timeline of AI and language models. AI + IQ testing (human vs AI) AI + IQ testing (human vs AI).</p>\n<p><strong>AI Model Release Timeline - AI Flash Report</strong><br>\n<a href=\"https://aiflashreport.com/model-releases.html\" target=\"_blank\">https://aiflashreport.com/model-releases.html</a><br>\n Track the latest AI model releases: GPT-4, Claude, Mistral, and more. Comprehensive timeline with specs, performance metrics, and availability.</p>\n<p><strong>The Latest Generative AI Models in 2025: A Comprehensive Guide</strong><br>\n<a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3\" target=\"_blank\">https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3</a><br>\nMay 11, 2025 What started as text-based language models has now exploded into sophisticated multimodal systems that understand and generate text, images, audio, and video, often in real-time. In this blog post, I bring you the most up-to-date and comprehensive roundup of generative AI models released in 2025, across companies like OpenAI, Google DeepMind, Anthropic, Alibaba, and more.As of May 2025, the evolution of generative AI models shows some clear trends: Reasoning Before Responding: Claude 3.7 and Gemini 2.5 use internal “thought loops” or reflection to improve correctness.</p>\n<p><strong>The Top 5 AI Models of 2025: What’s New and How to Use... 
| Medium</strong><br>\n<a href=\"https://medium.com/h7w/the-top-5-ai-models-of-2025-whats-new-and-how-to-use-them-6e31270804d7\" target=\"_blank\">https://medium.com/h7w/the-top-5-ai-models-of-2025-whats-new-and-how-to-use-them-6e31270804d7</a><br>\nMar 24, 2025 Each of these top 5 AI models expected in 2025 brings a unique set of strengths to the table. GPT-4.5 aims to be the all-encompassing intelligent system, unifying various AI capabilities for ease of use and broader applications. Grok 3 stands out for its advanced reasoning and real-time data access through X integration, making it a powerful tool for research and technical tasks.</p>\n<p><strong>2025 in artificial intelligence</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence\" target=\"_blank\">https://en.wikipedia.org/wiki/2025_in_artificial_intelligence</a><br>\nFeb 02, 2025 The following is a list of events of the year 2025 in artificial intelligence, as well as predicted and scheduled events that have not yet occurred.January 20 – DeepSeek releases DeepSeek-R1, a large language model based on DeepSeek-V3 utilising a chain-of-thought, stating it achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.[1] DeepSeek-R1 is open-source.</p>\n<p><strong>The Evolution of Generative AI: 2018–2025 Timeline... | Medium</strong><br>\n<a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc\" target=\"_blank\">https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc</a><br>\nMar 17, 2025</p>\n<p><strong>Timeline of artificial intelligence - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence\" target=\"_blank\">https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence</a><br>\nAug 04, 2013 The training computation of notable AI systems through time.2018. 
Alibaba language processing AI outscores top humans at a Stanford University reading and comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions.[131]. The European Lab for Learning and Intelligent Systems (aka Ellis) proposed as a pan-European competitor to American AI efforts, to stave off a brain drain of talent, along the lines of CERN after World War II.[132].</p>\n<p><strong>History of artificial intelligence - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence\" target=\"_blank\">https://en.wikipedia.org/wiki/History_of_artificial_intelligence</a><br>\n The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, amongst other use cases. Investment in AI boomed in the 2020s. The recent AI boom, initiated by the development of transformer architecture, led to the rapid scaling and public releases of large language models (LLMs) like ChatGPT.</p>\n<p><strong>Evolving LLM Market: Anthropic Leads 2025 Enterprise ...</strong><br>\n<a href=\"https://www.aicerts.ai/news/evolving-llm-market-anthropic-leads-2025-enterprise-share/\" target=\"_blank\">https://www.aicerts.ai/news/evolving-llm-market-anthropic-leads-2025-enterprise-share/</a><br>\nDec 30, 2025 See how the LLM Market shifts as Anthropic tops enterprise share, budgets rise, and experts deliver actionable guidance for savvy tech ...</p>\n<p><strong>New Models From OpenAI, Anthropic, Google – All At The Same Time - Forbes</strong><br>\n<a href=\"https://www.forbes.com/sites/johnwerner/2025/08/06/new-models-from-openai-anthropic-google--all-at-the-same-time/\" target=\"_blank\">https://www.forbes.com/sites/johnwerner/2025/08/06/new-models-from-openai-anthropic-google--all-at-the-same-time/</a><br>\nAug 06, 2025 It’s Christmas in August – at least, for those tech-wonks who are interested in new model releases. 
Today’s news is a very full stocking of brand new LLM editions from three of the biggies – OpenAI, Anthropic, and Google. I’ll go over these one by one, discussing what these most recent model iterations bring to the table. OpenAI OSS Models.</p>\n<p><strong>2025 will be a huge year for the field of artificial intelligence.</strong><br>\n<a href=\"https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/\" target=\"_blank\">https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/</a><br>\nDec 23, 2024 Why would Meta make this strategic pivot? Keeping up with the LLM frontier is incredibly expensive. Meta will need to invest many billions of dollars every year if it wants Llama to stay at or near parity with the latest frontier models from OpenAI, Anthropic and others. Meta is one of the world’s largest and most deep-pocketed companies. But it is also a publicly traded company that is ultimately answerable to its shareholders.</p>\n<p><strong>Meta Shares Hard Numbers For Llama’s Enterprise AI Value</strong><br>\n<a href=\"https://www.forbes.com/sites/patrickmoorhead/2024/08/29/meta-shares-hard-numbers-for-llamas-enterprise-ai-value/\" target=\"_blank\">https://www.forbes.com/sites/patrickmoorhead/2024/08/29/meta-shares-hard-numbers-for-llamas-enterprise-ai-value/</a><br>\nAug 30, 2024 Closed LLMs include OpenAI, Google Gemini and Anthropic. The most pervasive open LLM is the Llama model from Meta.Click through to the Meta release if you want to read glowing reviews from Jensen Huang of Nvidia and other partner executives from AWS, Databricks and Groq. 
The long list of Llama partners also includes Dell Technologies, IBM, Scale AI and Snowflake, among others.</p>\n<p><strong>The October 2025 LLM Revolution: What You Need to Know | Medium</strong><br>\n<a href=\"https://medium.com/@Micheal-Lanham/the-october-2025-llm-revolution-what-you-need-to-know-7828c222c9d2\" target=\"_blank\">https://medium.com/@Micheal-Lanham/the-october-2025-llm-revolution-what-you-need-to-know-7828c222c9d2</a><br>\nNov 28, 2025 Anthropic even released a Claude Agent SDK specifically for building complex AI agents with this model.Sources: Information based on official announcements from Anthropic, OpenAI, IBM Research, Meta, Baidu, Liquid AI, and Oracle as of October 2025. All model specifications and benchmarks cited from original sources.</p>\n<p><strong>LLM Frameworks You Can’t Ignore in 2025 (Apple, Meta, Google...)</strong><br>\n<a href=\"https://medium.com/@hadiyolworld007/llm-frameworks-you-cant-ignore-in-2025-apple-meta-google-more-939ae36010dc\" target=\"_blank\">https://medium.com/@hadiyolworld007/llm-frameworks-you-cant-ignore-in-2025-apple-meta-google-more-939ae36010dc</a><br>\nJul 01, 2025 The AI space isn’t just moving fast in 2025 — it’s warping forward like a bullet train with no brakes. LLMs (Large Language Models) are evolving rapidly, but what’s truly game-changing this year isn’t just the models — it’s the frameworks that power them. From on-device optimization to real-time agent orchestration, these frameworks are redefining how we build, scale, and deploy intelligence. And the surprising part? It’s not just OpenAI or Anthropic anymore.</p>\n<p><strong>LLM Updates 2025 | AI Updates & Model Version Changelog</strong><br>\n<a href=\"https://llm-stats.com/llm-updates\" target=\"_blank\">https://llm-stats.com/llm-updates</a><br>\n Latest version releases, API changes, and model improvements from OpenAI, Anthropic, Google, and Meta. 
Complete changelog for GPT, Claude, Gemini, and Llama model versions.</p>\n<p><strong>AI Leaderboards 2025 - Compare LLM, TTS, STT, Video ...</strong><br>\n<a href=\"https://llm-stats.com/\" target=\"_blank\">https://llm-stats.com/</a><br>\n Comprehensive AI leaderboards comparing LLM, text-to-speech, speech-to-text, video generation, image generation, and embedding models. Compare performance ...</p>\n<p><strong>OpenAI vs. Anthropic Claude, Google Gemini, and Mistral</strong><br>\n<a href=\"https://aitechcentre.com/openai-vs-anthropic-claude-google-gemini-and-mistral/\" target=\"_blank\">https://aitechcentre.com/openai-vs-anthropic-claude-google-gemini-and-mistral/</a><br>\n Competitors like Meta (with Llama 3) and others also play into industry dynamics, but in 2025 the quartet of OpenAI, Anthropic, Google, and open-model startups like Mistral define much of the AI narrative.</p>\n<p><strong>List of large language models</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/List_of_large_language_models\" target=\"_blank\">https://en.wikipedia.org/wiki/List_of_large_language_models</a><br>\n List ; Llama 4, April 5, 2025, Meta AI ; OpenAI o3 and o4-mini, April 16, 2025, OpenAI ; Qwen3, April 2025, Alibaba Cloud ; Claude 4, May 22, 2025, Anthropic ...</p>\n<p><strong>2018–2025 Timeline Revealed! 
| by Generative AI | Medium</strong><br>\n<a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc\" target=\"_blank\">https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc</a><br>\n Artificial Intelligence has seen groundbreaking advancements over the years, with Large Language Models (LLMs) evolving through post-training techniques that enhance their capabilities, alignment,...</p>\n<p><strong>Generative AI Timeline | The Blueprint</strong><br>\n<a href=\"https://timeline.the-blueprint.ai/\" target=\"_blank\">https://timeline.the-blueprint.ai/</a><br>\n 28 October 2025. Adobe release Firefly Image 5 ; 21 October 2025. OpenAI introduce ChatGPT Atlas browser ; 16 October 2025. Anthropic introduce Agent Skills ; 15 ...</p>\n<p><strong>AI Model Releases in 2025: The Roundup of AI Launches</strong><br>\n<a href=\"https://www.timesofai.com/news/roundup-of-ai-model-releases-in-2025/\" target=\"_blank\">https://www.timesofai.com/news/roundup-of-ai-model-releases-in-2025/</a><br>\n The AI model roundup of 2025 clearly indicates the transition from singular breakthroughs to an intelligent, integrated, and ready-for-production era. The multimodal reasoning, specialized coding models, open ecosystems, and 3D understanding are all advancing together.</p>\n<p><strong>A Brief History of Generative AI</strong><br>\n<a href=\"https://www.dataversity.net/articles/a-brief-history-of-generative-ai/\" target=\"_blank\">https://www.dataversity.net/articles/a-brief-history-of-generative-ai/</a><br>\nSep 10, 2025 The years 2023 through 2025 represent one of the most rapid periods of advancement in the history of generative AI. 
During this time, generative ...</p>\n<p><strong>2025 in artificial intelligence - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence\" target=\"_blank\">https://en.wikipedia.org/wiki/2025_in_artificial_intelligence</a><br>\n The following is a list of events of the year 2025 in artificial intelligence, as well as predicted and scheduled events that have not yet occurred.</p>\n<p><strong>2025 Artificial Intelligence (AI) Events Timeline</strong><br>\n<a href=\"https://www.aibase.com/ai-timeline/2025\" target=\"_blank\">https://www.aibase.com/ai-timeline/2025</a><br>\n Explore the comprehensive timeline of 2025's artificial intelligence milestones, including major breakthroughs, product launches, and industry developments. Stay updated with the most authoritative AI chronicle.</p>\n<p><strong>Generative artificial intelligence</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Generative_artificial_intelligence\" target=\"_blank\">https://en.wikipedia.org/wiki/Generative_artificial_intelligence</a><br>\n By mid 2025, despite continued consumer growth, many companies were increasingly abandoning generative AI pilot projects as they had difficulties with ...</p>\n<p><strong>The History of AI: A Timeline of Artificial Intelligence</strong><br>\n<a href=\"https://www.coursera.org/articles/history-of-ai\" target=\"_blank\">https://www.coursera.org/articles/history-of-ai</a><br>\nOct 15, 2025 AI has a long history stretching back to the 1950s, with significant milestones at nearly every decade. 
In this article, we'll review some of the major events ...</p>\n<p><strong>Generative AI in 2025: A Timeline - LinkedIn</strong><br>\n<a href=\"https://www.linkedin.com/pulse/generative-ai-2025-timeline-mar%C3%ADa-jos%C3%A9-pel%C3%A1ez-montalvo-0isbe\" target=\"_blank\">https://www.linkedin.com/pulse/generative-ai-2025-timeline-mar%C3%ADa-jos%C3%A9-pel%C3%A1ez-montalvo-0isbe</a><br>\n Over the course of 2025, the conversation around generative AI did not remain stable — its focus, actors, and tone shifted over time. This visualization brings together 745 news items and lets ...</p>\n<p><strong>AI Timeline | Innovations and Advancements</strong><br>\n<a href=\"https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations\" target=\"_blank\">https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations</a><br>\nFeb 12, 2024 See how generative AI has evolved, from early attempts to cutting-edge transformers. Uncover the advancements that revolutionized AI.</p>\n<p><strong>Top 9 Large Language Models as of December 2025</strong><br>\n<a href=\"https://www.shakudo.io/blog/top-9-large-language-models\" target=\"_blank\">https://www.shakudo.io/blog/top-9-large-language-models</a><br>\n As of late 2024 and mid-2025, DeepSeek has been actively releasing and updating its models, including the DeepSeek V3. ... 
Mistral AI, a prominent player ...</p>\n<p><strong>Alibaba releases AI model it says surpasses DeepSeek</strong><br>\n<a href=\"https://www.reuters.com/technology/artificial-intelligence/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3-2025-01-29/\" target=\"_blank\">https://www.reuters.com/technology/artificial-intelligence/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3-2025-01-29/</a><br>\nJan 28, 2025 BEIJING, Jan 29 (Reuters) - Chinese tech company Alibaba (9988.HK) on Wednesday released a new version of its Qwen 2.5 artificial intelligence model that it claimed surpassed the...</p>\n<p><strong>Alibaba releases AI model it claims surpasses DeepSeek-V3</strong><br>\n<a href=\"https://www.straitstimes.com/asia/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3\" target=\"_blank\">https://www.straitstimes.com/asia/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3</a><br>\n BEIJING - Chinese tech company Alibaba on Jan 29 released a new version of its Qwen 2.5 artificial intelligence (AI) model that it claimed surpassed the highly acclaimed DeepSeek-V3.</p>\n<p><strong>The Top AI Models And Trends Shaping SaaS in 2025</strong><br>\n<a href=\"https://www.cloudzero.com/blog/top-ai-models/\" target=\"_blank\">https://www.cloudzero.com/blog/top-ai-models/</a><br>\nAug 19, 2025 Open-weight and open-source models. 
LLaMA 4, Mistral, and DeepSeek's open releases let you run models on your own infrastructure, tweak them ...</p>\n<p><strong>French AI lab Mistral releases new AI models as it looks to keep pace ...</strong><br>\n<a href=\"https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html\" target=\"_blank\">https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html</a><br>\nDec 02, 2025 Artificial intelligence startup Mistral released a new suite of models Tuesday as it looks to keep pace with leading AI labs Google, OpenAI and DeepSeek.</p>\n<p><strong>Alibaba Unveils Upgraded AI Model, Claims It Surpasses Rival DeepSeek ...</strong><br>\n<a href=\"https://www.morningstar.com/news/dow-jones/20250129971/alibaba-unveils-upgraded-ai-model-claims-it-surpasses-rival-deepseek-v3\" target=\"_blank\">https://www.morningstar.com/news/dow-jones/20250129971/alibaba-unveils-upgraded-ai-model-claims-it-surpasses-rival-deepseek-v3</a><br>\n Chinese e-commerce giant Alibaba released a new version of its artificial intelligence model, claiming it surpasses DeepSeek's AI model across various benchmarks.</p>\n<p><strong>France’s Mistral Releases AI Model Tailored for... - Bloomberg</strong><br>\n<a href=\"https://www.bloomberg.com/news/articles/2025-05-07/france-s-mistral-releases-ai-model-tailored-for-businesses\" target=\"_blank\">https://www.bloomberg.com/news/articles/2025-05-07/france-s-mistral-releases-ai-model-tailored-for-businesses</a><br>\nMay 07, 2025 Mistral is releasing a new artificial intelligence model that’s tailored for business use cases, as the French startup tries to capitalize on Europe’s concerns about being overly reliant on technology from Silicon Valley. Mistral, Europe’s leading AI developer, was last valued at €5.8 billion ($6.6 billion) in June 2024, but the company faces mounting competition. Meta Platforms Inc. 
recently announced a family of models, responding to a flurry of releases out of China from DeepSeek, Alibaba Group Holding Ltd. The AI landscape is increasingly seen as an expensive, two-country race.</p>\n<p><strong>Alibaba Touts New AI Model Superior to DeepSeek’s and... - Bloomberg</strong><br>\n<a href=\"https://www.bloomberg.com/news/articles/2025-01-29/alibaba-touts-new-ai-model-superior-to-deepseek-s-and-meta-s\" target=\"_blank\">https://www.bloomberg.com/news/articles/2025-01-29/alibaba-touts-new-ai-model-superior-to-deepseek-s-and-meta-s</a><br>\nJan 29, 2025 DeepSeek, a 20-month-old startup that was founded in Alibaba’s home city, Hangzhou, became a global sensation this week and figures prominently as the first benchmark that Alibaba appears to now measure itself against. Alibaba Cloud also shared scores that suggest its AI beats OpenAI and Anthropic’s models in certain benchmarks.</p>\n<p><strong>China's Alibaba Rolls Out Latest Flagship Qwen AI Model... - Bloomberg</strong><br>\n<a href=\"https://www.bloomberg.com/news/articles/2025-04-28/alibaba-rolls-out-latest-flagship-ai-model-in-post-deepseek-race\" target=\"_blank\">https://www.bloomberg.com/news/articles/2025-04-28/alibaba-rolls-out-latest-flagship-ai-model-in-post-deepseek-race</a><br>\nApr 29, 2025 Alibaba — which in 2025 declared itself all-in on the AI race — came out with a new model in its Qwen 2.5 series just a few weeks ago that can process text, pictures, audio and video — and is efficient enough to run directly on phones and laptops. It unveiled a new version of its AI assistant Quark app in March. 
OpenAI, Google and Anthropic have similarly released a flurry of models in recent months.</p>\n<p><strong>Mistral awed by DeepSeek’s AI model - tribune.net.ph</strong><br>\n<a href=\"https://tribune.net.ph/2025/01/31/mistral-awed-by-deepseeks-ai-model\" target=\"_blank\">https://tribune.net.ph/2025/01/31/mistral-awed-by-deepseeks-ai-model</a><br>\nJan 31, 2025 French AI startup Mistral on Thursday hailed Chinese competitor DeepSeek’s R1 model as “great” for the fast-developing sector, while announcing another new release of its own.</p>\n</details>\n<details><summary>Gathered key details from:<p></p>\n<ul>\n<li><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence\" target=\"_blank\">2025 in artificial intelligence - Wikipedia</a></li></ul></summary>\n\n<h3>Query</h3>\n<p>Complete list of all AI model releases in 2025, organized by month, including LLMs, video models, image models, and agentic frameworks from all major companies</p>\n<h3>Response</h3>\n<ul>\n<li>January 20 – DeepSeek releases DeepSeek-R1, a large language model based on DeepSeek-V3 utilising a chain-of-thought, stating it achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. 
DeepSeek-R1 is open-source.</li>\n</ul>\n<p>Context: This chunk details the release of DeepSeek-R1, a large language model.</p>\n<hr>\n<ul>\n<li>February 3 – OpenAI releases ChatGPT Deep Research, an artificial intelligence system integrated into ChatGPT, which generates cited reports on a user-specified topic by autonomously browsing the web for 5 to 30 minutes.</li>\n</ul>\n<p>Context: This chunk describes the release of ChatGPT Deep Research, an AI system that generates cited reports.</p>\n<hr>\n<ul>\n<li>February 6 – Mistral AI releases Le Chat, an AI assistant able to answer up to 1,000 words per second.</li>\n</ul>\n<p>Context: This chunk mentions the release of Le Chat by Mistral AI, an AI assistant.</p>\n<hr>\n<ul>\n<li>February 27 – OpenAI announces a research preview of GPT-4.5, its largest and most advanced AI model to date.</li>\n</ul>\n<p>Context: This chunk announces the research preview of GPT-4.5 by OpenAI.</p>\n<hr>\n<p>31 March: OpenAI's latest model GPT-4.5 is reported to be indistinguishable from a human in text conversations.</p>\n<p>Context: This chunk reports on GPT-4.5's performance, stating it's indistinguishable from a human in text conversations.</p>\n<hr>\n<ul>\n<li>27 March – Engineers in the fields of electrical and computer science create a groundbreaking AI system known as \"Super-Turing AI,\" designed to function more similarly to the human brain. 
Unlike conventional models that isolate tasks and shuttle massive volumes of data between components, this innovative AI merges key processes, streamlining operations and enhancing efficiency.</li>\n</ul>\n<p>Context: This chunk describes the creation of a groundbreaking AI system known as \"Super-Turing AI\".</p>\n<hr>\n<ul>\n<li>16 April – OpenAI announces the launch of two new AI models, o3 and o4-mini.</li>\n</ul>\n<p>Context: This chunk announces the launch of two new AI models, o3 and o4-mini, by OpenAI.</p>\n<hr>\n<ul>\n<li>14 May – Google DeepMind announces AlphaEvolve, a Gemini-powered coding agent for designing advanced algorithms.</li>\n</ul>\n<p>Context: This chunk details the announcement of AlphaEvolve by Google DeepMind, a Gemini-powered coding agent.</p>\n<hr>\n<p>20 May: Google DeepMind sets a new benchmark for AI video generation with its Veo 3 model.</p>\n<p>Context: This chunk mentions Google DeepMind setting a new benchmark for AI video generation with its Veo 3 model.</p>\n<hr>\n<ul>\n<li>20 May\n<ul>\n<li>Google launches A.I. Mode, which will be a feature on their search engine, and uses the Gemini model.</li>\n<li>Google DeepMind announces Veo 3, a new state-of-the-art video generation model. The company also boosts the performance of Gemini 2.5 Pro, its flagship AI model.</li>\n</ul>\n</li>\n</ul>\n<p>Context: This chunk describes Google launching A.I. Mode and Google DeepMind announcing Veo 3 and boosting Gemini 2.5 Pro.</p>\n<hr>\n<ul>\n<li>22 May – Anthropic releases Claude 4, with two models: Claude Opus 4 and Claude Sonnet 4. 
According to Anthropic, Claude 4 can function on its own for hours.</li>\n</ul>\n<p>Context: This chunk details Anthropic releasing Claude 4, with two models: Claude Opus 4 and Claude Sonnet 4.</p>\n<hr>\n<ul>\n<li>7 August – OpenAI releases GPT-5.</li>\n</ul>\n<p>Context: This chunk states that OpenAI releases GPT-5.</p>\n<hr>\n<ul>\n<li>8 September - Artificial intelligence uses vast energy, but University of Florida researchers develop a chip that replaces electricity with light for key AI tasks. Using microscopic lenses etched onto silicon, it performs laser-powered computations with drastically lower energy and near-perfect accuracy.</li>\n</ul>\n<p>Context: This chunk describes a chip developed by University of Florida researchers that replaces electricity with light for key AI tasks, which could be considered an advancement in AI hardware or framework for AI tasks.</p>\n<hr>\n<p>Links from Source - Can be extracted on follow-up librarian calls:</p>\n<p>Link: https://en.wikipedia.org/wiki/2025_in_artificial_intelligence<br>\nContext: This is the main page for '2025 in artificial intelligence' from Wikipedia.</p>\n</details>\n<details><summary>Gathering key details from...<p></p>\n<ul>\n<li><a href=\"https://timeline.the-blueprint.ai/\" target=\"_blank\">https://timeline.the-blueprint.ai/</a></li></ul></summary>\n\n</details>\n<details><summary>Gathered key details from:<p></p>\n<ul>\n<li><a href=\"https://en.wikipedia.org/wiki/List_of_large_language_models\" target=\"_blank\">List of large language models - Wikipedia</a></li></ul></summary>\n\n<h3>Query</h3>\n<p>Complete list of large language models released in 2025 with release dates</p>\n<h3>Response</h3>\n<p>| DeepSeek-R1 | January 2025 | DeepSeek | 671 | Not applicable | Unknown | MIT | No pretraining. Reinforcement-learned upon V3-Base. 
|</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Qwen2.5 | January 2025 | Alibaba | 72 | 18T tokens | Unknown | Qwen License | 7 dense models, with parameter count from 0.5B to 72B. They also released 2 MoE variants. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| MiniMax-Text-01 | January 2025 | Minimax | 456 | 4.7T tokens | Unknown | Minimax Model license | |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Gemini 2.0 | February 2025 | Google DeepMind | Unknown | Unknown | Unknown | Proprietary | Three models released: Flash, Flash-Lite and Pro |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Claude 3.7 | February 24, 2025 | Anthropic | Unknown | Unknown | Unknown | Proprietary | One model, Sonnet 3.7. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| YandexGPT 5 Lite Pretrain and Pro | February 25, 2025 | Yandex | Unknown | Unknown | Unknown | Proprietary | Used in Alice Neural Network chatbot. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| GPT-4.5 | February 27, 2025 | OpenAI | Unknown | Unknown | Unknown | Proprietary | Largest non-reasoning model. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Grok 3 | February 2025 | xAI | Unknown | Unknown | Unknown, estimated 5,800,000 | Proprietary | Training cost claimed \"10x the compute of previous state-of-the-art models\". 
|</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Gemini 2.5 | March 25, 2025 | Google DeepMind | Unknown | Unknown | Unknown | Proprietary | Three models released: Flash, Flash-Lite and Pro |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| YandexGPT 5 Lite Instruct | March 31, 2025 | Yandex | Unknown | Unknown | Unknown | Proprietary | Used in Alice Neural Network chatbot. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Llama 4 | April 5, 2025 | Meta AI | 400 | 40T tokens | Unknown | Llama 4 license | |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| OpenAI o3 and o4-mini | April 16, 2025 | OpenAI | Unknown | Unknown | Unknown | Proprietary | Reasoning models. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Qwen3 | April 2025 | Alibaba Cloud | 235 | 36T tokens | Unknown | Apache 2.0 | Multiple sizes, the smallest being 0.6B. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Claude 4 | May 22, 2025 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Includes two models, Sonnet and Opus. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Grok 4 | July 9, 2025 | xAI | Unknown | Unknown | Unknown | Proprietary | |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| GLM-4.5 | July 29, 2025 | Zhipu AI | 355 | 22T tokens | Unknown | MIT | Released in 355B and 106B sizes. Corpus size was calculated by combining the 15 trillion tokens and the 7 trillion tokens pre-training mix. 
|</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| GPT-OSS | August 5, 2025 | OpenAI | 117 | Unknown | Unknown | Apache 2.0 | Released in 20B and 120B sizes. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Claude 4.1 | August 5, 2025 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Includes one model, Opus. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| GPT-5 | August 7, 2025 | OpenAI | Unknown | Unknown | Unknown | Proprietary | Includes three models, GPT-5, GPT-5 mini, and GPT-5 nano. GPT-5 is available in ChatGPT and API. It includes thinking abilities. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| DeepSeek-V3.1 | August 21, 2025 | DeepSeek | 671 | 15.639T | | MIT | Training size: 14.8T tokens of DeepSeek V3 plus 839B tokens from the extension phases (630B + 209B). It is a hybrid model that can switch between thinking and non-thinking modes. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| YandexGPT 5.1 Pro | August 28, 2025 | Yandex | Unknown | Unknown | Unknown | Proprietary | Used in Alice Neural Network chatbot. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Apertus | September 2, 2025 | ETH Zurich and EPF Lausanne | 70 | 15 trillion | Unknown | Apache 2.0 | It's said to be the first LLM to be compliant with EU's Artificial Intelligence Act. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Claude 4.5 | September 29, 2025 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Only one variant is available, Sonnet. 
|</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| DeepSeek-V3.2-Exp | September 29, 2025 | DeepSeek | 685 | | | MIT | This experimental model built upon v3.1-Terminus uses a custom efficient mechanism tagged DeepSeek Sparse Attention (DSA). |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| GLM-4.6 | September 30, 2025 | Zhipu AI | 357 | | | Apache 2.0 | |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| Alice AI LLM 1.0 | October 28, 2025 | Yandex | Unknown | Unknown | Unknown | Proprietary | Available in Alice AI chatbot. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n<hr>\n<p>| GPT-5.2 | December 11, 2025 | OpenAI | Unknown | Unknown | Unknown, estimated $1.25 billion to $2.5 billion | Proprietary | It was able to solve an open problem in statistical learning theory that had previously remained unresolved by human researchers. |</p>\n<p>Context: The following are large language models released in 2025 with their release dates.</p>\n</details>\n<details><summary>Searched with Kagi<p></p>\n<ul>\n<li>Claude Code Anthropic 2025 release date</li>\n<li>Sora 2 OpenAI 2025 release</li>\n<li>Google Veo 3 2025 release</li>\n<li>xAI Grok 3 February 2025</li>\n<li>Meta Llama 4 April 2025 release</li></ul></summary><br>\nSources (50):<br>\n<strong>Claude Developer Platform - Claude Docs</strong><br>\n<a href=\"https://platform.claude.com/docs/en/release-notes/overview\" target=\"_blank\">https://platform.claude.com/docs/en/release-notes/overview</a><br>\nThe console will be available at both URLs until December 16, 2025. 
After that date, console.anthropic.com will automatically redirect to platform.claude.com.\n\n<p><strong>Claude Code - AI Wiki - Artificial Intelligence Wiki</strong><br>\n<a href=\"https://aiwiki.ai/wiki/Claude_Code\" target=\"_blank\">https://aiwiki.ai/wiki/Claude_Code</a><br>\n On September 29, 2025, Anthropic released Claude Code version 2.0 with significant enhancements including checkpoints for safe rollbacks, a native VS Code extension, terminal v2.0 UX improvements, and the renamed Claude Agent SDK (formerly Claude Code SDK).</p>\n<p><strong>Claude Sonnet 4.5</strong><br>\n<a href=\"https://www.anthropic.com/claude/sonnet\" target=\"_blank\">https://www.anthropic.com/claude/sonnet</a><br>\n Claude Sonnet 3.7 and Claude Code. Feb 24, 2025. Sonnet 3.7 is the first hybrid reasoning model and our most intelligent model to date. It's state-of-the art ...</p>\n<p><strong>Code with Claude 2025 \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/events/code-with-claude-2025\" target=\"_blank\">https://www.anthropic.com/events/code-with-claude-2025</a><br>\nMay 30, 2025 Code with Claude is a hands-on, one-day event focused on exploring real-world implementations and best practices using the Anthropic API, CLI tools, and Model Context Protocol (MCP).Thank you for your interest in Code with Claude 2025. Applications for this event are now closed. We invite you to tune into the livestream on Jul 15, 2025 at 8:30am EST. The livestream will take place right here on this page.</p>\n<p><strong>Code with Claude - Anthropic's First Developer Conference \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/news/Introducing-code-with-claude\" target=\"_blank\">https://www.anthropic.com/news/Introducing-code-with-claude</a><br>\nApr 03, 2025 Introducing Anthropic's first developer conference: Code with Claude. 
Apr 3, 2025.You'll learn about our product roadmap, Claude Code, MCP, development methodologies, AI agent implementation strategies, and tool use patterns from the technical teams building Claude-powered applications at leading companies. Event details. When: May 22, 2025.</p>\n<p><strong>Anthropic's Transparency Hub</strong><br>\n<a href=\"https://www.anthropic.com/transparency\" target=\"_blank\">https://www.anthropic.com/transparency</a><br>\nDec 04, 2025 See our Claude Opus 4.5 system card's Section 2 on capabilities. Acceptable Uses, See our Usage Policy. Release date, November 2025. Access ...</p>\n<p><strong>Claude 3.7 Sonnet and Claude Code \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/news/claude-3-7-sonnet?ref=ftt\" target=\"_blank\">https://www.anthropic.com/news/claude-3-7-sonnet?ref=ftt</a><br>\nFeb 27, 2025</p>\n<p><strong>Introducing Claude Opus 4.5 \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/news/claude-opus-4-5\" target=\"_blank\">https://www.anthropic.com/news/claude-opus-4-5</a><br>\n Our newest model, Claude Opus 4.5, is available today. It’s intelligent, efficient, and the best model in the world for coding, agents, and computer use. It’s also meaningfully better at everyday tasks like deep research and working with slides and spreadsheets.</p>\n<p><strong>Introducing Claude 4 \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/news/claude-4\" target=\"_blank\">https://www.anthropic.com/news/claude-4</a><br>\n Today, we’re introducing the next generation of Claude models: Claude Opus 4 and Claude Sonnet 4, setting new standards for coding, advanced reasoning, and AI agents. Claude Opus 4 is the world’s best coding model, with sustained performance on complex, long-running tasks and agent workflows. 
Claude Sonnet 4 is a significant upgrade to Claude Sonnet 3.7, delivering superior coding and ...</p>\n<p><strong>Claude (language model)</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Claude_(language_model)\" target=\"_blank\">https://en.wikipedia.org/wiki/Claude_(language_model)</a><br>\n Claude 3 was released on March 4, 2024, and was said by Anthropic to outperform its peers on most of the common evaluation benchmarks for AI systems. The Claude 3 family includes three models in ascending order of capability: Haiku, Sonnet, and Opus.</p>\n<p><strong>Sora 2 is here</strong><br>\n<a href=\"https://openai.com/index/sora-2/\" target=\"_blank\">https://openai.com/index/sora-2/</a><br>\nSep 30, 2025 Today we’re releasing Sora 2, our flagship video and audio generation model. The original Sora model from February 2024 was in many ways the GPT‑1 moment for video—the first time video generation started to seem like it was working, and simple behaviors like object permanence emerged from scaling up pre-training compute. Since then, the Sora team has been focused on training models with more advanced world simulation capabilities.</p>\n<p><strong>Sora 2 System Card</strong><br>\n<a href=\"https://openai.com/index/sora-2-system-card/\" target=\"_blank\">https://openai.com/index/sora-2-system-card/</a><br>\n We’ll continue to learn from how people use Sora 2 and refine the system to balance safety while maximizing creative potential. This system card describes the model’s capabilities, potential risks, and the safety measures OpenAI has developed for a safe deployment of Sora 2.</p>\n<p><strong>Sora is here | OpenAI</strong><br>\n<a href=\"https://openai.com/index/sora-is-here/\" target=\"_blank\">https://openai.com/index/sora-is-here/</a><br>\nDec 09, 2024 Our video generation model, Sora, is now available to use at sora.com. 
Users can generate videos up to 1080p resolution, up to 20 sec long, and in widescreen, vertical or square aspect ratios. We developed a new version of Sora—Sora Turbo—that is significantly faster than the model we previewed in February. We’re releasing it today as a standalone product at Sora.com to ChatGPT Plus and Pro users.</p>\n<p><strong>Sora | OpenAI</strong><br>\n<a href=\"https://openai.com/sora/\" target=\"_blank\">https://openai.com/sora/</a><br>\nFeb 15, 2024 The new Sora app. Turn your ideas into videos with hyperreal motion and sound.</p>\n<p><strong>The Walt Disney Company and OpenAI reach landmark agreement to bring ...</strong><br>\n<a href=\"https://openai.com/index/disney-sora-agreement/\" target=\"_blank\">https://openai.com/index/disney-sora-agreement/</a><br>\n The Walt Disney Company and OpenAI have reached an agreement for Disney to become the first major content licensing partner on Sora, OpenAI’s short-form generative AI video platform, bringing these leaders in creativity and innovation together to unlock new possibilities in imaginative storytelling.</p>\n<p><strong>Sora (text-to-video model) - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Sora_(text-to-video_model)\" target=\"_blank\">https://en.wikipedia.org/wiki/Sora_(text-to-video_model)</a><br>\n Sora is a text-to-video model and social media app developed by OpenAI. The model generates short video clips based on prompts, and can also extend existing short videos. 
In February 2024, OpenAI previewed examples of its output to the public, [1] with the first generation of Sora released publicly for ChatGPT Plus and ChatGPT Pro users in the US and Canada in December 2024, [2][3] and the ...</p>\n<p><strong>Introducing 4o Image Generation - OpenAI</strong><br>\n<a href=\"https://openai.com/index/introducing-4o-image-generation/\" target=\"_blank\">https://openai.com/index/introducing-4o-image-generation/</a><br>\n At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. The result—image generation that is not only beautiful, but useful.</p>\n<p><strong>Sora - Release Notes</strong><br>\n<a href=\"https://help.openai.com/en/articles/12593142-sora-release-notes\" target=\"_blank\">https://help.openai.com/en/articles/12593142-sora-release-notes</a><br>\n October 15, 2025. Today, we're launching storyboards for Sora 2 in sora.com in beta. Storyboards let you sketch out your video second by second, making it ...</p>\n<p><strong>Sora 2 Release Date: Availability, Invites & New Features ...</strong><br>\n<a href=\"https://skywork.ai/blog/sora-2-release-date-availability-and-how-to-get-invited/\" target=\"_blank\">https://skywork.ai/blog/sora-2-release-date-availability-and-how-to-get-invited/</a><br>\nNov 05, 2025 OpenAI officially announced Sora 2 release date on September 30, 2025, with its initial rollout starting the same day in the United States ...</p>\n<p><strong>OpenAI Sora 2 Launch: Game-Changer for AI Video in 2025</strong><br>\n<a href=\"https://skywork.ai/blog/openai-sora-2-2025/\" target=\"_blank\">https://skywork.ai/blog/openai-sora-2-2025/</a><br>\nOct 09, 2025 OpenAI Sora 2: Why This 2025 Release Could Reshape AI Video Workflows.Capabilities: OpenAI emphasizes improved physics, instruction‑following, and multi‑shot consistency along with synchronized audio (dialogue and SFX). 
This is a notable shift from earlier “silent” AI video eras (OpenAI — “Sora 2 is here,” 2025). App and platform availability: Sora is rolling out as an iOS app (invite‑only at launch) with access also planned through sora.com after invite acceptance.</p>\n<p><strong>Veo 3.1</strong><br>\n<a href=\"https://deepmind.google/models/veo/\" target=\"_blank\">https://deepmind.google/models/veo/</a><br>\nAug 31, 2016 Veo 3.1. Video, meet audio. Our latest video generation model, designed to empower filmmakers and storytellers.From development to deployment. We built Veo with responsibility and safety in mind. We block harmful requests and results, we test how new features might affect safety, and we have both our own teams and outside experts try to find and fix potential problems before release. It's crucial to introduce technologies such as Veo in a responsible way.</p>\n<p><strong>Gemini Diffusion</strong><br>\n<a href=\"https://deepmind.google/models/gemini-diffusion/\" target=\"_blank\">https://deepmind.google/models/gemini-diffusion/</a><br>\n Gemini Diffusion is our state-of-the-art research model exploring what diffusion means for language – and text generation.</p>\n<p><strong>Gemma</strong><br>\n<a href=\"https://deepmind.google/models/gemma/\" target=\"_blank\">https://deepmind.google/models/gemma/</a><br>\n Hear how the Gemma research team unveil the architecture, design principles, and innovations behind Google's family of lightweight, state-of-the-art open models. 
Explore the development of intelligent agents using Gemma models, with core components that facilitate agent creation, including capabilities for function calling, planning, and reasoning.</p>\n<p><strong>Project Astra</strong><br>\n<a href=\"https://deepmind.google/models/project-astra/\" target=\"_blank\">https://deepmind.google/models/project-astra/</a><br>\n Project Astra is a research prototype exploring breakthrough capabilities for Google products — on the way to building a universal AI assistant. Its current capabilities include spatial processing, s…</p>\n<p><strong>Project Mariner</strong><br>\n<a href=\"https://deepmind.google/models/project-mariner/\" target=\"_blank\">https://deepmind.google/models/project-mariner/</a><br>\n Project Mariner is a research prototype exploring the future of human-agent interaction, starting with browsers. It automates tasks to help boost productivity.</p>\n<p><strong>Explore Google Veo 3.1 AI Video Generator with Audio</strong><br>\n<a href=\"https://www.yeschat.ai/features/veo-3-1\" target=\"_blank\">https://www.yeschat.ai/features/veo-3-1</a><br>\nOct 31, 2025 What Is Veo 3. Released in May 2025, Veo 3 became Google’s biggest leap in AI video.Is Google Veo 3.1 Coming Soon? Veo 3.1 officially launched on October 15, 2025. The update introduces Scene Extension, First–Last Frame transitions, and Reference to Video, giving creators powerful tools to build longer, more coherent cinematic sequences.</p>\n<p><strong>Gemini AI video generator powered by Veo 3.1</strong><br>\n<a href=\"https://gemini.google/overview/video-generation/\" target=\"_blank\">https://gemini.google/overview/video-generation/</a><br>\n Create high-quality, 8-second videos with Veo 3.1, our latest AI video generator. 
Simply describe what you have in mind or upload a photo and watch your ...</p>\n<p><strong>Veo (text-to-video model)</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Veo_(text-to-video_model)\" target=\"_blank\">https://en.wikipedia.org/wiki/Veo_(text-to-video_model)</a><br>\nMar 04, 2025 Veo 3, released in May 2025, can also generate accompanying audio. In May 2025, Google released Veo 3, which not only generates videos but also creates synchronized audio — including dialogue, sound effects, and ambient noise — to match the visuals.[4] Google also announced Flow, a video-creation tool powered by Veo and Imagen.[5][6] Google DeepMind CEO Demis Hassabis described the release as the moment when AI video generation left the.</p>\n<p><strong>Gemini Apps’ release updates & improvements</strong><br>\n<a href=\"https://gemini.google/release-notes/\" target=\"_blank\">https://gemini.google/release-notes/</a><br>\n Explore the latest updates from Gemini Apps - including improvements in generative AI capabilities, expanded access, and more.</p>\n<p><strong>Veo 3 Fast available for everyone on Vertex AI - Google Cloud</strong><br>\n<a href=\"https://cloud.google.com/blog/products/ai-machine-learning/veo-3-fast-available-for-everyone-on-vertex-ai\" target=\"_blank\">https://cloud.google.com/blog/products/ai-machine-learning/veo-3-fast-available-for-everyone-on-vertex-ai</a><br>\n Veo 3 Fast, a model designed for speed and rapid iteration, is now generally available for everyone on Vertex AI. It's a faster way to turn text to video, from narrated product demos to short...</p>\n<p><strong>Grok 3 Beta — The Age of Reasoning Agents</strong><br>\n<a href=\"https://x.ai/news/grok-3\" target=\"_blank\">https://x.ai/news/grok-3</a><br>\nFeb 19, 2025 Try Grok. February 19, 2025. We tested these models on the 2025 American Invitational Mathematics Examination (AIME), which was released just 7 days ago on Feb 12th. 
With our highest level of test-time compute (cons@64), Grok 3 (Think) achieved 93.3% on this competition. Grok 3 (Think) also attained 84.6% on graduate-level expert reasoning (GPQA), and 79.4% on LiveCodeBench for code generation and problem-solving.</p>\n<p><strong>News | xAI</strong><br>\n<a href=\"https://x.ai/news\" target=\"_blank\">https://x.ai/news</a><br>\n We are thrilled to unveil an early preview of Grok 3, our most advanced model yet, blending superior reasoning with extensive pretraining knowledge.</p>\n<p><strong>Meet Grok by xAI: your truth-seeking AI companion for honest answers...</strong><br>\n<a href=\"https://x.ai/grok\" target=\"_blank\">https://x.ai/grok</a><br>\nMay 17, 2024 SuperGrok Heavy users will also have access to Grok 4 Heavy, for your more challenging tasks. 07 Jul, 2025. Understand documents.export default useDebounce; Do more with Grok. Unlock a SuperGrok subscription on Grok.com. We've just launched SuperGrok Heavy, providing access to Grok Heavy and much higher rate limits.</p>\n<p><strong>Musk says chatbot Grok 3 will be unveiled on February 17 - The Hindu</strong><br>\n<a href=\"https://www.thehindu.com/sci-tech/technology/musk-says-chatbot-grok-3-to-be-unveiled-on-february-17/article69227498.ece\" target=\"_blank\">https://www.thehindu.com/sci-tech/technology/musk-says-chatbot-grok-3-to-be-unveiled-on-february-17/article69227498.ece</a><br>\nFeb 16, 2025 xAI and Grok logos are seen in this illustration taken, February 16, 2025. | Photo Credit: Reuters. Elon Musk said his startup xAI will release its Grok 3 chatbot on Monday (February 17, 2025) and billed it as the \"smartest AI on Earth\" in a fiercely competitive market. The company's flagship artificial intelligence product will go live with a demonstration on Monday (February 17, 2025) night at 8:00 p.m. 
Pacific time (0400 GMT), the tech billionaire wrote Saturday (February 15, 2025) on his social media platform X.</p>\n<p><strong>Elon Musk’s xAI releases its latest flagship model, Grok 3</strong><br>\n<a href=\"https://techcrunch.com/2025/02/17/elon-musks-ai-company-xai-releases-its-latest-flagship-ai-grok-3/\" target=\"_blank\">https://techcrunch.com/2025/02/17/elon-musks-ai-company-xai-releases-its-latest-flagship-ai-grok-3/</a><br>\nFeb 17, 2025 Members of the xAI team, including Musk (far right), during Grok 3’s livestreamed unveiling.Image Credits:xAI. “Grok 3 is an order of magnitude more capable than Grok 2,” Musk said during a livestreamed presentation on Monday. “ [It’s a] maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.” Grok 3 is a family of models, to be precise. A smaller version of Grok 3, Grok 3 mini, responds to questions more quickly at the cost of some accuracy.</p>\n<p><strong>X doubles its Premium+ plan prices after xAI releases Grok 3</strong><br>\n<a href=\"https://techcrunch.com/2025/02/19/x-doubles-its-premium-plan-prices-after-xai-releases-grok-3/\" target=\"_blank\">https://techcrunch.com/2025/02/19/x-doubles-its-premium-plan-prices-after-xai-releases-grok-3/</a><br>\nFeb 18, 2025 3:00 AM PST · February 19, 2025. Hours after Elon Musk’s AI company xAI released the latest version of its AI model, Grok 3, X has significantly hiked the price for its top Premium+ subscription plan, which gives users access to the latest model, to almost $50 per month. On the support page that lists X’s premium plans, the monthly subscription price for Premium+ in the U.S. 
has been updated to $40, and the yearly price to $350.</p>\n<p><strong>Grok 3 appears to be driving Grok usage to new heights | TechCrunch</strong><br>\n<a href=\"https://techcrunch.com/2025/02/24/grok-3-appears-to-be-driving-grok-usage-to-new-heights/\" target=\"_blank\">https://techcrunch.com/2025/02/24/grok-3-appears-to-be-driving-grok-usage-to-new-heights/</a><br>\nFeb 24, 2025 10:07 AM PST · February 24, 2025. Elon Musk’s AI company, xAI, released Grok 3, its long-awaited flagship AI model, last week. Grok 3 powers the Grok chatbot apps for mobile and the web, as well as the Grok experience on the Musk-owned social network X. Given that there’s so much competition in the AI chatbot space these days, it wasn’t a foregone conclusion that Grok 3 would make much of an impact.</p>\n<p><strong>New Grok 3 release tops LLM leaderboards despite... - Ars Technica</strong><br>\n<a href=\"https://arstechnica.com/ai/2025/02/new-grok-3-release-tops-llm-leaderboards-despite-musk-approved-based-opinions/\" target=\"_blank\">https://arstechnica.com/ai/2025/02/new-grok-3-release-tops-llm-leaderboards-despite-musk-approved-based-opinions/</a><br>\nFeb 19, 2025 They enable this website to offer enhanced and personalized functionalities.xAI plans to add voice synthesis to the Grok app within a week and launch an enterprise API with DeepSearch capabilities in the following weeks. The company says it will also open-source the previous Grok 2 model once Grok 3 stabilizes, which Musk estimates will take several months. 
This article was updated on February 19, 2025 at 6:53 am to better contextualize Elon Musk’s post about Grok 3.</p>\n<p><strong>Elon Musk's Grok 3 AI Set to Challenge Chatbot Giants</strong><br>\n<a href=\"https://www.fintechweekly.com/magazine/articles/elon-musk-grok-3-launch\" target=\"_blank\">https://www.fintechweekly.com/magazine/articles/elon-musk-grok-3-launch</a><br>\nFeb 17, 2025 xAI’s latest chatbot, Grok 3, is launching on February 17, 2025, at 8:00 PM Pacific Time (February 18, 2025, at 5:00 AM UTC+1) with claims of superior reasoning abilities.</p>\n<p><strong>Top 4 AI breakthroughs in February: Anthropic Claude... - Neudesic</strong><br>\n<a href=\"https://www.neudesic.com/blog/top-ai-news-february-2025/\" target=\"_blank\">https://www.neudesic.com/blog/top-ai-news-february-2025/</a><br>\nFeb 28, 2025 February 28, 2025. Add a header to begin generating the table of contents.Why Grok-3 matters. The expertise to train large language models is highly concentrated among a few individuals across the top AI research labs today. One of the most significant achievements about Grok-3 is that xAI built a highly competitive model less than a year after the company’s founding, highlighting the rapid pace of innovation in an increasingly competitive market. 3. Anthropic’s Claude 3.7 Sonnet: Pioneering hybrid reasoning models.</p>\n<p><strong>The Llama 4 herd: The beginning of a new era of natively ...</strong><br>\n<a href=\"https://ai.meta.com/blog/llama-4-multimodal-intelligence/\" target=\"_blank\">https://ai.meta.com/blog/llama-4-multimodal-intelligence/</a><br>\n The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation. April 5, 2025•. 12 minute read. Takeaways. 
We're sharing the ...</p>\n<p><strong>Meta releases Llama 4, a new crop of flagship AI models</strong><br>\n<a href=\"https://techcrunch.com/2025/04/05/meta-releases-llama-4-a-new-crop-of-flagship-ai-models/\" target=\"_blank\">https://techcrunch.com/2025/04/05/meta-releases-llama-4-a-new-crop-of-flagship-ai-models/</a><br>\nApr 05, 2025 1:01 PM PDT · April 5, 2025. Meta has released a new collection of AI models, Llama 4, in its Llama family — on a Saturday, no less. There are three new models in total: Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth. All were trained on “large amounts of unlabeled text, image, and video data” to give them “broad visual understanding,” Meta says.</p>\n<p><strong>Meta's vanilla Maverick AI model ranks below rivals on... | TechCrunch</strong><br>\n<a href=\"https://techcrunch.com/2025/04/11/metas-vanilla-maverick-ai-model-ranks-below-rivals-on-a-popular-chat-benchmark/\" target=\"_blank\">https://techcrunch.com/2025/04/11/metas-vanilla-maverick-ai-model-ranks-below-rivals-on-a-popular-chat-benchmark/</a><br>\nApr 12, 2025 One of Meta's newest AI models, Llama 4 Maverick, ranks below rivals on a popular chat benchmark. Meta didn't originally reveal the score.The release version of Llama 4 has been added to LMArena after it was found out they cheated, but you probably didn’t see it because you have to scroll down to 32nd place which is where is ranks pic.twitter.com/A0Bxkdx4LX. — ρ:ɡeσn (@pigeon__s) April 11, 2025.</p>\n<p><strong>Meta previews an API for its Llama AI models | TechCrunch</strong><br>\n<a href=\"https://techcrunch.com/2025/04/29/meta-previews-an-api-for-its-llama-ai-models/\" target=\"_blank\">https://techcrunch.com/2025/04/29/meta-previews-an-api-for-its-llama-ai-models/</a><br>\nApr 29, 2025 10:15 AM PDT · April 29, 2025.Meta said it won’t use Llama API customer data to train the company’s own models and that models built using the Llama API can be transferred to another host. 
For devs building on top of Meta’s recently released Llama 4 models specifically, the Llama API offers model-serving options via partnerships with Cerebras and Groq.</p>\n<p><strong>Initial impressions of Llama 4</strong><br>\n<a href=\"https://simonwillison.net/2025/Apr/5/llama-4-notes/\" target=\"_blank\">https://simonwillison.net/2025/Apr/5/llama-4-notes/</a><br>\nApr 05, 2025 5th April 2025. Dropping a model release as significant as Llama 4 on a weekend is plain unfair! So far the best place to learn about the new model family is this post on the Meta AI blog. They’ve released two new models today: Llama 4 Maverick is a 400B model (128 experts, 17B active parameters), text and image input with a 1 million token context length.</p>\n<p><strong>Llama (language model)</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Llama_(language_model)\" target=\"_blank\">https://en.wikipedia.org/wiki/Llama_(language_model)</a><br>\n Llama[a] (\"Large Language Model Meta AI\" serving as a backronym) is a family of large language models (LLMs) released by Meta AI starting in February 2023. [3] Llama models come in different sizes, ranging from 1 billion to 2 trillion parameters.</p>\n<p><strong>Meta releases new AI model Llama 4</strong><br>\n<a href=\"https://www.reuters.com/technology/meta-releases-new-ai-model-llama-4-2025-04-05/\" target=\"_blank\">https://www.reuters.com/technology/meta-releases-new-ai-model-llama-4-2025-04-05/</a><br>\nApr 05, 2025 April 5 (Reuters) - Meta Platforms (META.O) on Saturday released the latest version of its large language model (LLM) Llama, called the Llama 4 Scout and Llama 4 Maverick. Meta...</p>\n<p><strong>Meta Just Unveiled Llama 4 Multimodal AI | Proje Defteri</strong><br>\n<a href=\"https://projedefteri.com/en/blog/llama4-multimodal-ai/\" target=\"_blank\">https://projedefteri.com/en/blog/llama4-multimodal-ai/</a><br>\nApr 06, 2025 Meta Launch Llama 4 (Meta AI Blog). 
Important Note: Meta has announced a new chapter in the history of artificial intelligence today.Meta officially announced the Llama 4 models and features mentioned in this article on April 5, 2025. Model features and performance may change in the future. This blog may be updated with new developments or error corrections.</p>\n<p><strong>Meta Llama 4 explained: Everything you need to know - TechTarget</strong><br>\n<a href=\"https://www.techtarget.com/WhatIs/feature/Meta-Llama-4-explained-Everything-you-need-to-know\" target=\"_blank\">https://www.techtarget.com/WhatIs/feature/Meta-Llama-4-explained-Everything-you-need-to-know</a><br>\n Learn more about Meta's large language model Llama and about version 4 released in April 2025. Explore the different versions within the family and see how it compares to other LLMs.</p>\n<p><strong>Meta Llama 4 explained: Everything you need to know</strong><br>\n<a href=\"https://www.techtarget.com/whatis/feature/Meta-Llama-4-explained-Everything-you-need-to-know\" target=\"_blank\">https://www.techtarget.com/whatis/feature/Meta-Llama-4-explained-Everything-you-need-to-know</a><br>\nMay 05, 2025 Since Llama 2's release in July 2023, Meta has provided the model under an open permissive license, easing organizational access and use. Its multiple iterations have expanded Llama's capabilities and improved its standing among rivals, including models from OpenAI, Anthropic and Google. 
On April 5, 2025, Meta released the Llama 4 model family, the Llama 4 herd.</p>\n</details>\n<details><summary>Searched with Kagi<p></p>\n<ul>\n<li>Gemini 2.0 Flash release date January February 2025</li>\n<li>DeepSeek V3 release December 2024 January 2025</li>\n<li>Mistral AI models 2025 releases</li>\n<li>OpenAI o1 o3 reasoning models 2025</li>\n<li>Google Imagen 3 2025 release</li></ul></summary><br>\nSources (50):<br>\n<strong>Gemini 2.0 Flash | Generative AI on Vertex AI</strong><br>\n<a href=\"https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash\" target=\"_blank\">https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash</a><br>\nApr 09, 2025 Gemini 2.0 Flash Stay organized with collections Save and categorize content based on your preferences. Gemini 2.0 Flash delivers next-generation features and improved capabilities designed for the agentic era, including superior speed, built-in tool use, multimodal generation, and a 1M token context window. Gemini 2.0 Flash improves upon our previous Flash model and offers enhanced quality at similar speeds.Knowledge cutoff date. June 2024. Versions. gemini-2.0-flash-001. Launch stage: GA. Release date: February 5, 2025. Discontinuation date: March 3, 2026.\n\n<p><strong>Gemini 2.0 Flash-Lite | Generative AI on Vertex AI | Google Cloud...</strong><br>\n<a href=\"https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash-lite\" target=\"_blank\">https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash-lite</a><br>\nApr 07, 2025 Last updated 2025-12-22 UTC. Need to tell us more? 
</p>\n<p><strong>Model versions and lifecycle | Generative AI on Vertex AI</strong><br>\n<a href=\"https://docs.cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versions\" target=\"_blank\">https://docs.cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versions</a><br>\n Latest stable models ; gemini-2.0-flash-001, February 5, 2025, March 3, 2026 ; gemini-2.0-flash-lite-001, February 25, 2025, March 3, 2026 ; gemini-embedding-001 ...</p>\n<p><strong>Gemini 2.0 model updates: 2.0 Flash, Flash-Lite, Pro Experimental</strong><br>\n<a href=\"https://blog.google/technology/google-deepmind/gemini-model-updates-february-2025/\" target=\"_blank\">https://blog.google/technology/google-deepmind/gemini-model-updates-february-2025/</a><br>\nFeb 05, 2025 Try Gemini 2.0 Flash in the Gemini app or the Gemini API in Google AI Studio and Vertex AI. Pricing details can be found in the Google for Developers blog. This table compares the capabilities of different versions of Gemini, including 1.5 Flash, 1.5 Pro, 2.0 Flash-Lite, 2.0 Flash, and 2.0 Pro, across various benchmarks. 
It shows the performance of each version on tasks like general knowledge, code generation, reasoning, factuality, multilingual understanding, math, long-context understanding, image understanding, audio translation, and video analysis.</p>\n<p><strong>The Gemini app is now powered by Gemini 2.0 Flash.</strong><br>\n<a href=\"https://blog.google/feed/gemini-app-model-update-january-2025/\" target=\"_blank\">https://blog.google/feed/gemini-app-model-update-january-2025/</a><br>\n 2.0 Flash is now rolling out to the Gemini web and mobile app, including our enterprise accounts in the coming days. Gemini 1.5 Flash and 1.5 Pro will remain available for the next few weeks for you to continue your existing conversations.</p>\n<p><strong>Gemini (language model)</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Gemini_(language_model\" target=\"_blank\">https://en.wikipedia.org/wiki/Gemini_(language_model)</a>)<br>\nSep 13, 2023 On January 30, 2025, Google released Gemini 2.0 Flash as the new default model, with Gemini 1.5 Flash still available for usage. This was followed by the release of Gemini 2.0 Pro on February 5, 2025. Additionally, Google released Gemini 2.0 Flash Thinking Experimental, which details the language model's thinking process when responding to prompts.[46].</p>\n<p><strong>Google introduces Gemini 2.0: A new AI model for the agentic era</strong><br>\n<a href=\"https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/\" target=\"_blank\">https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/</a><br>\nDec 11, 2024 Today’s releases mark a new chapter for our Gemini model. With the release of Gemini 2.0 Flash, and the series of research prototypes exploring agentic possibilities, we have reached an exciting milestone in the Gemini era. 
And we’re looking forward to continuing to safely explore all the new possibilities within reach as we build towards AGI.</p>\n<p><strong>Introducing Gemini 3 Flash: Benchmarks, global availability</strong><br>\n<a href=\"https://blog.google/products/gemini/gemini-3-flash/\" target=\"_blank\">https://blog.google/products/gemini/gemini-3-flash/</a><br>\n Gemini 3 Flash offers frontier intelligence built for speed at a fraction of the cost.</p>\n<p><strong>Gemini 2.0 Flash - developer.puter.com</strong><br>\n<a href=\"https://developer.puter.com/encyclopedia/gemini-2-0-flash/\" target=\"_blank\">https://developer.puter.com/encyclopedia/gemini-2-0-flash/</a><br>\nApr 04, 2025 Gemini 2.0 Flash's release positioned it as a competitor to other AI models like OpenAI's GPT series and Claude by Anthropic.2 The Flash Thinking Experimental variant specifically competed with other reasoning models like OpenAI's o-series and DeepSeek's R-series.9 In benchmark comparisons: On AIME2024 (mathematics), Gemini 2.0 Flash Thinking Experimental scored 73.3%, while OpenAI's o3-mini (high-end) scored 87.3%10.</p>\n<p><strong>Release notes | Gemini API - Google AI for Developers</strong><br>\n<a href=\"https://ai.google.dev/gemini-api/docs/changelog\" target=\"_blank\">https://ai.google.dev/gemini-api/docs/changelog</a><br>\nDec 15, 2023 Released gemini-2.5-flash-native-audio-preview-12-2025, a new native audio model for the Live API. This update improves the model's ability to handle complex workflows.August 18, 2025. Released URL context tool to general availability (GA), a tool for providing URLs as additional context to prompts. 
Support for using URL context with the gemini-2.0-flash model (available during experimental release) will be discontinued in one week.</p>\n<p><strong>DeepSeek-V3.2 Release</strong><br>\n<a href=\"https://api-docs.deepseek.com/news/news251201\" target=\"_blank\">https://api-docs.deepseek.com/news/news251201</a><br>\nDec 01, 2025 DeepSeek-V3.2 Release. Launching ...</p>\n<p><strong>DeepSeek-V3.1 Release</strong><br>\n<a href=\"https://api-docs.deepseek.com/news/news250821\" target=\"_blank\">https://api-docs.deepseek.com/news/news250821</a><br>\nAug 21, 2025 Pricing Changes · New pricing starts & off-peak discounts end at Sep 5th, 2025, 16:00 (UTC Time) · Until then, APIs follow current pricing.</p>\n<p><strong>DeepSeek-V3-0324 Release | DeepSeek API Docs</strong><br>\n<a href=\"https://api-docs.deepseek.com/news/news250325\" target=\"_blank\">https://api-docs.deepseek.com/news/news250325</a><br>\n 📜 Models are now released under the MIT License, just like DeepSeek-R1!</p>\n<p><strong>Change Log | DeepSeek API Docs</strong><br>\n<a href=\"https://api-docs.deepseek.com/updates\" target=\"_blank\">https://api-docs.deepseek.com/updates</a><br>\nOct 13, 2024 Both deepseek-chat and deepseek-reasoner have been upgraded to DeepSeek-V3.2-Exp. The new model has further enhanced its code generation capabilities based on the original Coder model, optimized for common programming application scenarios, and achieved the following results on the standard test set: HumanEval: 89%. LiveCodeBench (January-September): 41%.</p>\n<p><strong>Introducing DeepSeek-V3 | DeepSeek API Docs</strong><br>\n<a href=\"https://api-docs.deepseek.com/news/news1226\" target=\"_blank\">https://api-docs.deepseek.com/news/news1226</a><br>\nDec 26, 2024 Look forward to multimodal support and other cutting-edge features in the DeepSeek ecosystem. 
💡 Together, let’s push the boundaries of innovation!</p>\n<p><strong>DeepSeek (chatbot)</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/DeepSeek_(chatbot)\" target=\"_blank\">https://en.wikipedia.org/wiki/DeepSeek_(chatbot)</a><br>\n DeepSeek is a generative artificial intelligence chatbot developed by the Chinese company DeepSeek. Released on 20 January 2025, DeepSeek-R1 surpassed ...</p>\n<p><strong>DeepSeek-V3 Release: New Open-Source MoE Model</strong><br>\n<a href=\"https://www.helicone.ai/blog/deepseek-v3\" target=\"_blank\">https://www.helicone.ai/blog/deepseek-v3</a><br>\nJan 28, 2025 Lina Lam · January 22, 2025. On December 26, 2024, DeepSeek officially released a new open-source large language model DeepSeek-V3, a Mixture-of-Experts (MoE) model with 671 billion parameters. Despite being open-source, DeepSeek-V3 shows performance comparable to top models like GPT-4 and Claude 3.5 Sonnet.</p>\n<p><strong>DeepSeek - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/DeepSeek\" target=\"_blank\">https://en.wikipedia.org/wiki/DeepSeek</a><br>\n DeepSeek released its first model, DeepSeek Coder, on 2 November 2023, followed by the DeepSeek-LLM series on 29 November 2023. 
[36]: section 5 In January 2024, it released two DeepSeek-MoE models (Base and Chat), [37] and in April, three DeepSeek-Math models (Base, Instruct, and RL).</p>\n<p><strong>DeepSeek releases improved V3 AI model three months after initial ...</strong><br>\n<a href=\"https://www.notebookcheck.net/DeepSeek-releases-improved-V3-AI-model-three-months-after-initial-launch.989270.0.html\" target=\"_blank\">https://www.notebookcheck.net/DeepSeek-releases-improved-V3-AI-model-three-months-after-initial-launch.989270.0.html</a><br>\n DeepSeek, a Chinese AI company, has released DeepSeek-V3-0324, an updated version of its V3 AI large-language model (LLM), which was originally launched in December 2024.</p>\n<p><strong>2025 United States federal government shutdown - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/2025_United_States_federal_government_shutdown\" target=\"_blank\">https://en.wikipedia.org/wiki/2025_United_States_federal_government_shutdown</a><br>\n On September 30, 2025, President Donald Trump and congressional leaders held discussions that failed to avert a shutdown. Article One of the United States Constitution vests the U.S. Congress with the authority to appropriate funds drawn from the Treasury. [12]: 1 Since 1977, the federal budget process has used a fiscal year that runs from October 1 to September 30 of the following year, with ...</p>\n<p><strong>Introducing Mistral 3</strong><br>\n<a href=\"https://mistral.ai/news/mistral-3\" target=\"_blank\">https://mistral.ai/news/mistral-3</a><br>\n All our new Mistral 3 models, from Large 3 to Ministral 3, were trained on NVIDIA Hopper GPUs to tap high-bandwidth HBM3e memory for frontier-scale workloads.</p>\n<p><strong>Latest news</strong><br>\n<a href=\"https://mistral.ai/en/news\" target=\"_blank\">https://mistral.ai/en/news</a><br>\n Latest updates from Mistral AI. ; Introducing Mistral OCR 3. Achieving a new frontier for both accuracy and efficiency in document processing. 
December 17, 2025.</p>\n<p><strong>Medium is the new large.</strong><br>\n<a href=\"https://mistral.ai/news/mistral-medium-3\" target=\"_blank\">https://mistral.ai/news/mistral-medium-3</a><br>\n At Mistral AI, we are continuously pushing the frontier for both open models (Mistral Small, Mistral Large, Pixtral, many others) as well as enterprise models (Mistral OCR, Mistral Saba, Ministral 3B / 8B, and more).</p>\n<p><strong>Models - from cloud to edge</strong><br>\n<a href=\"https://mistral.ai/models\" target=\"_blank\">https://mistral.ai/models</a><br>\n Mistral models are available on Azure AI Studio, AWS Bedrock, Google Cloud Model Garden, IBM Watsonx, and Snowflake. Explore cloud ... Mistral AI © 2025.</p>\n<p><strong>Frontier AI LLMs, assistants, agents, services | Mistral AI</strong><br>\n<a href=\"https://mistral.ai/\" target=\"_blank\">https://mistral.ai/</a><br>\nNov 14, 2019 The most powerful AI platform for enterprises. Customize, fine-tune, and deploy AI assistants, autonomous agents, and multimodal AI with open models.Custom AI solutions, deployable anywhere. Build and deploy AI apps with complete control: fine-tune models, develop agents, and run anywhere — from edge to cloud — with enterprise-grade tooling.</p>\n<p><strong>Mistral Small 3</strong><br>\n<a href=\"https://mistral.ai/news/mistral-small-3\" target=\"_blank\">https://mistral.ai/news/mistral-small-3</a><br>\n Mistral Small 3 is a pre-trained and instructed model catered to the ‘80%’ of generative AI tasks—those that require robust language and instruction following performance, with very low latency. 
We designed this new model to saturate performance at a size suitable for local deployment.</p>\n<p><strong>Introducing: Devstral 2 and Mistral Vibe CLI.</strong><br>\n<a href=\"https://mistral.ai/news/devstral-2-vibe-cli\" target=\"_blank\">https://mistral.ai/news/devstral-2-vibe-cli</a><br>\nDec 09, 2025 Today, we're releasing Devstral 2—our next-generation coding model family available in two sizes: Devstral 2 (123B) ...</p>\n<p><strong>Models - Mistral Docs</strong><br>\n<a href=\"https://docs.mistral.ai/getting-started/models\" target=\"_blank\">https://docs.mistral.ai/getting-started/models</a><br>\nDec 11, 2023 Our frontier-class multimodal model released May 2025. Our top-tier large model for high-complexity tasks with the latest version released November 2024. Our first frontier-class multimodal model released November 2024. Our best multilingual open source model released July 2024. Older models that have been deprecated or retired.</p>\n<p><strong>Changelog | Mistral Docs</strong><br>\n<a href=\"https://docs.mistral.ai/getting-started/changelog\" target=\"_blank\">https://docs.mistral.ai/getting-started/changelog</a><br>\nJan 20, 2024 We released Mistral Medium 3.1 (mistral-medium-2508). frequency_penalty: penalizes the repetition of words based on their frequency in the generated text. n: number of completions to return for each request, input tokens are only billed once. 
We downscaled the temperature parameter of pixtral-12b, ministral-3b-2410, and ministral-8b-2410 by a multiplier of 0.43 to improve consistency, quality, and unify model behavior.</p>\n<p><strong>French AI lab Mistral releases new AI models as it looks to keep pace ...</strong><br>\n<a href=\"https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html\" target=\"_blank\">https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html</a><br>\nDec 02, 2025 Artificial intelligence startup Mistral released a new suite of models Tuesday as it looks to keep pace with leading AI labs Google, OpenAI and DeepSeek.</p>\n<p><strong>OpenAI for Developers in 2025</strong><br>\n<a href=\"https://developers.openai.com/blog/openai-for-developers-2025\" target=\"_blank\">https://developers.openai.com/blog/openai-for-developers-2025</a><br>\nDec 30, 2025 Models like o1, o3, and o4-mini made it clear that spending extra compute to think before answering could dramatically improve reliability on ...</p>\n<p><strong>Introducing OpenAI o3 and o4-mini</strong><br>\n<a href=\"https://openai.com/index/introducing-o3-and-o4-mini/\" target=\"_blank\">https://openai.com/index/introducing-o3-and-o4-mini/</a><br>\nApr 16, 2025 Today, we’re releasing OpenAI o3 and o4-mini, the latest in our o-series of models trained to think for longer before responding. These are the smartest models we’ve released to date, representing a step change in ChatGPT's capabilities for everyone from curious users to advanced researchers.</p>\n<p><strong>Learning to reason with LLMs | OpenAI</strong><br>\n<a href=\"https://openai.com/index/learning-to-reason-with-llms/\" target=\"_blank\">https://openai.com/index/learning-to-reason-with-llms/</a><br>\nSep 12, 2024 To highlight the reasoning improvement over GPT‑4o, we tested our models on a diverse set of human exams and ML benchmarks. 
We show that o1 significantly outperforms GPT‑4o on the vast majority of these reasoning-heavy tasks. Unless otherwise specified, we evaluated o1 on the maximal test-time compute setting. Competition math eval (AIME 2024).</p>\n<p><strong>OpenAI o3-mini</strong><br>\n<a href=\"https://openai.com/index/openai-o3-mini/\" target=\"_blank\">https://openai.com/index/openai-o3-mini/</a><br>\n We’re releasing OpenAI o3‑mini, the newest, most cost-efficient model in our reasoning series, available in both ChatGPT and the API today.</p>\n<p><strong>Introducing deep research - OpenAI</strong><br>\n<a href=\"https://openai.com/index/introducing-deep-research/\" target=\"_blank\">https://openai.com/index/introducing-deep-research/</a><br>\n Powered by a version of the upcoming OpenAI o3 model that’s optimized for web browsing and data analysis, it leverages reasoning to search, interpret, and analyze massive amounts of text, images, and PDFs on the internet, pivoting as needed in reaction to information it encounters.</p>\n<p><strong>OpenAI o3 - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/OpenAI_o3\" target=\"_blank\">https://en.wikipedia.org/wiki/OpenAI_o3</a><br>\nDec 21, 2024 On January 31, 2025, OpenAI released a smaller model, o3-mini, followed on April 16 by o3 and o4-mini. On January 31, 2025, OpenAI released o3-mini to all ChatGPT users (including free-tier) and some API users. OpenAI describes o3-mini as a \"specialized alternative\" to o1 for \"technical domains requiring precision and speed\".[6] o3-mini features three reasoning effort levels: low, medium and high.</p>\n<p><strong>OpenAI o1 - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/OpenAI_o1\" target=\"_blank\">https://en.wikipedia.org/wiki/OpenAI_o1</a><br>\nJul 26, 2023 OpenAI o1 is a generative pre-trained transformer, the first in OpenAI's \"o\" series of reasoning models. A preview of o1 was released by OpenAI on September 12, 2024. 
o1 spends time \"thinking\" before it answers, making it better at complex reasoning tasks, science and programming than GPT-4o. The full version was released to ChatGPT users on December 5, 2024.</p>\n<p><strong>Introducing GPT-4.1 in the API - OpenAI</strong><br>\n<a href=\"https://openai.com/index/gpt-4-1/\" target=\"_blank\">https://openai.com/index/gpt-4-1/</a><br>\n Introducing GPT-4.1 in the API—a new family of models with across-the-board improvements, including major gains in coding, instruction following, and long-context understanding. We’re also releasing our first nano model. Available to developers worldwide starting today.</p>\n<p><strong>Reasoning model - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Reasoning_model\" target=\"_blank\">https://en.wikipedia.org/wiki/Reasoning_model</a><br>\nJan 27, 2025 A reasoning model, also known as reasoning language models or large reasoning models, is a type of large language model that has been specifically trained to solve complex tasks requiring multiple steps of logical reasoning.In January 2025, DeepSeek released R1, a reasoning model that achieved performance comparable to OpenAI's o1 at significantly lower computational cost.</p>\n<p><strong>OpenAI o1 vs o3 2025 Reasoning Model Deep Dive - Skywork ai</strong><br>\n<a href=\"https://skywork.ai/blog/llm/openai-o1-vs-o3-2025-reasoning-model-deep-dive/\" target=\"_blank\">https://skywork.ai/blog/llm/openai-o1-vs-o3-2025-reasoning-model-deep-dive/</a><br>\nNov 02, 2025 Evolution from OpenAI o1 to o3. OpenAI o1 vs o3 Core Capability Comparison. Chain-of-Thought Depth Analysis.It tends to reason step by step, which is great for tricky logic, but it adds latency and sometimes overthinks simple stuff. When o3 appeared in my workspace in late October 2025, my first impression was that it felt like o1’s more decisive sibling. 
Same general idea, reason, then answer, but with tighter control over verbosity and a stronger bias toward “just do it” when the task is straightforward.</p>\n<p><strong>Imagen (text-to-image model)</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Imagen_(text-to-image_model)\" target=\"_blank\">https://en.wikipedia.org/wiki/Imagen_(text-to-image_model)</a><br>\nJun 30, 2022 Imagen is a series of text-to-image models developed by Google DeepMind. They were developed by Google Brain until the company's merger with DeepMind in April 2023. Imagen is primarily used to generate images from text prompts, similar to Stability AI's Stable Diffusion, OpenAI's DALL-E, or Midjourney. At I/O 2025 the company released an improved model, Imagen 4.[8]</p>\n<p><strong>Veo (text-to-video model) - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Veo_(text-to-video_model)\" target=\"_blank\">https://en.wikipedia.org/wiki/Veo_(text-to-video_model)</a><br>\n Veo, or Google Veo, is a text-to-video model developed by Google DeepMind and announced in May 2024. As a generative AI model, it creates videos based on user prompts. Veo 3, released in May 2025, can also generate accompanying audio.</p>\n<p><strong>Android Developers Blog: Boost user engagement with AI Image...</strong><br>\n<a href=\"https://android-developers.googleblog.com/2025/10/boost-user-engagement-with-ai-image.html\" target=\"_blank\">https://android-developers.googleblog.com/2025/10/boost-user-engagement-with-ai-image.html</a><br>\nOct 13, 2025 Adding custom images to your app can significantly improve and personalize user experience and boost user engagement. This post explores two new capabilities for image generation with Firebase AI Logic: the specialized Imagen editing features, currently in preview, and the general availability of Gemini 2.5 Flash Image (a.k.a \"Nano Banana\"), designed for contextual or conversational image generation. 
Boost user engagement with images generated via Firebase AI Logic.</p>\n<p><strong>Android Developers Blog: Generate stunning visuals in your Android...</strong><br>\n<a href=\"https://android-developers.googleblog.com/2025/03/imagen3-via-vertex-ai-in-firebase.html\" target=\"_blank\">https://android-developers.googleblog.com/2025/03/imagen3-via-vertex-ai-in-firebase.html</a><br>\nMar 05, 2025 Imagen 3 unlocks exciting new possibilities for Android developers. Generated visuals can adapt to the content of your app, creating a more engaging user experience. For instance, your users can generate custom artwork to enhance their in-app profile. Imagen can also improve your app's storytelling by bringing its narratives to life with delightful personalized illustrations.</p>\n<p><strong>Imagen</strong><br>\n<a href=\"https://deepmind.google/models/imagen/\" target=\"_blank\">https://deepmind.google/models/imagen/</a><br>\n Photorealistic images, sharper clarity, improved spelling and typography. Bring your imagination to life faster than ever before.</p>\n<p><strong>Imagen 3 arrives in the Gemini API - Google Developers Blog</strong><br>\n<a href=\"https://developers.googleblog.com/en/imagen-3-arrives-in-the-gemini-api/\" target=\"_blank\">https://developers.googleblog.com/en/imagen-3-arrives-in-the-gemini-api/</a><br>\n Developers can now access Imagen 3, Google’s state-of-the-art image generation model, through the Gemini API. The model will be initially accessible to paid users, with a rollout to the free tier coming soon.</p>\n<p><strong>Imagen 3 arrives in the Gemini API</strong><br>\n<a href=\"https://developers.googleblog.com/imagen-3-arrives-in-the-gemini-api/\" target=\"_blank\">https://developers.googleblog.com/imagen-3-arrives-in-the-gemini-api/</a><br>\nFeb 06, 2025 Gemini / Google AI Studio. Imagen 3 arrives in the Gemini API. FEB. 6, 2025. Ivan Solovyev Product Manager. Share. 
</p>\n<p><strong>Gemini Updates: Custom Gems & Improved Image Generation</strong><br>\n<a href=\"https://blog.google/products/gemini/google-gemini-update-august-2024/\" target=\"_blank\">https://blog.google/products/gemini/google-gemini-update-august-2024/</a><br>\nAug 28, 2024 Imagen 3 can create images in various styles, including photorealistic landscapes and textured oil paintings. We've worked to make technical improvements and added safeguards to Imagen 3. We'll gradually roll out the generation of images of people, starting with an early access version for Gemini Advanced, Business, and Enterprise users. Summaries were generated by Google AI.</p>\n<p><strong>60 of Google’s biggest AI announcements and updates in 2025</strong><br>\n<a href=\"https://blog.google/technology/ai/google-ai-news-recap-2025/\" target=\"_blank\">https://blog.google/technology/ai/google-ai-news-recap-2025/</a><br>\n Google had a busy 2025 with many AI product and feature announcements. Gemini updates, AI Mode in Search and new hardware were just a few of the highlights. Look forward to more AI advancements from Google in 2026. Summaries were generated by Google AI. Generative AI is experimental.</p>\n<p><strong>Updates to Veo, Imagen and VideoFX, plus introducing Whisk in...</strong><br>\n<a href=\"https://blog.google/technology/google-labs/video-image-generation-update-december-2024/\" target=\"_blank\">https://blog.google/technology/google-labs/video-image-generation-update-december-2024/</a><br>\nDec 16, 2024 Google has released updated versions of its video and image generation models, Veo 2 and Imagen 3. These models are now available in Google Labs tools, VideoFX and ImageFX, and a new tool called Whisk. 
Veo 2 generates high-quality videos with improved realism and understanding of cinematography, while Imagen 3 produces brighter, better composed images with more diverse art styles.</p>\n</details>\n<details><summary>Contemplated</summary>\n<p>I'm gathering good information. Let me organize what I have so far and identify gaps:</p>\n<p><strong>January 2025:</strong></p>\n<ul>\n<li>DeepSeek-R1 (Jan 20)</li>\n<li>Qwen 2.5 update (Jan 29)</li>\n<li>MiniMax-Text-01 (January)</li>\n<li>Gemini 2.0 Flash (Jan 30)</li>\n<li>o3-mini (Jan 31)</li>\n</ul>\n<p><strong>February 2025:</strong></p>\n<ul>\n<li>ChatGPT Deep Research (Feb 3)</li>\n<li>Gemini 2.0 Flash GA (Feb 5)</li>\n<li>Gemini 2.0 Pro (Feb 5)</li>\n<li>Mistral Le Chat (Feb 6)</li>\n<li>Grok 3 (Feb 17-19)</li>\n<li>Claude 3.7 Sonnet + Claude Code (Feb 24)</li>\n<li>YandexGPT 5 (Feb 25)</li>\n<li>GPT-4.5 (Feb 27)</li>\n</ul>\n<p><strong>March 2025:</strong></p>\n<ul>\n<li>Gemini 2.5 (Mar 25)</li>\n<li>DeepSeek-V3-0324 (March)</li>\n<li>YandexGPT 5 Lite Instruct (Mar 31)</li>\n</ul>\n<p><strong>April 2025:</strong></p>\n<ul>\n<li>Llama 4 (Apr 5)</li>\n<li>o3 and o4-mini (Apr 16)</li>\n<li>Qwen3 (April)</li>\n</ul>\n<p><strong>May 2025:</strong></p>\n<ul>\n<li>AlphaEvolve (May 14)</li>\n<li>Veo 3 (May 20)</li>\n<li>Claude 4 (May 22)</li>\n</ul>\n<p><strong>July 2025:</strong></p>\n<ul>\n<li>Grok 4 (Jul 9)</li>\n<li>GLM-4.5 (Jul 29)</li>\n</ul>\n<p><strong>August 2025:</strong></p>\n<ul>\n<li>GPT-OSS (Aug 5)</li>\n<li>Claude 4.1 (Aug 5)</li>\n<li>GPT-5 (Aug 7)</li>\n<li>DeepSeek-V3.1 (Aug 21)</li>\n<li>YandexGPT 5.1 Pro (Aug 28)</li>\n</ul>\n<p><strong>September 2025:</strong></p>\n<ul>\n<li>Apertus (Sep 2)</li>\n<li>Claude 4.5 (Sep 29)</li>\n<li>DeepSeek-V3.2-Exp (Sep 29)</li>\n<li>Sora 2 (Sep 30)</li>\n<li>GLM-4.6 (Sep 30)</li>\n</ul>\n<p><strong>October 2025:</strong></p>\n<ul>\n<li>Veo 3.1 (Oct 15)</li>\n<li>Alice AI LLM 1.0 (Oct 28)</li>\n<li>Adobe Firefly Image 5 (Oct 28)</li>\n</ul>\n<p><strong>November 
2025:</strong></p>\n<ul>\n<li>Claude Opus 4.5 (November)</li>\n</ul>\n<p><strong>December 2025:</strong></p>\n<ul>\n<li>Mistral 3 (Dec 2)</li>\n<li>DeepSeek-V3.2 (Dec 1)</li>\n<li>Devstral 2 (Dec 9)</li>\n<li>GPT 5.2 (Dec 11)</li>\n<li>Mistral OCR 3 (Dec 17)</li>\n</ul>\n<p>I need more information on:</p>\n<ul>\n<li>Gemini 3 Flash</li>\n<li>GPT-4.1</li>\n<li>More Mistral releases</li>\n<li>Image models (Imagen 4, DALL-E updates)</li>\n<li>Agentic frameworks</li>\n<li>Computer use updates</li>\n<li>More video models</li>\n</ul>\n</details>\n<details><summary>Searched with Kagi<p></p>\n<ul>\n<li>GPT-4.1 OpenAI April 2025 release</li>\n<li>Gemini 3 Flash Google 2025</li>\n<li>Anthropic computer use 2025</li>\n<li>OpenAI Codex CLI 2025</li>\n<li>Imagen 4 Google 2025 release</li></ul></summary><br>\nSources (50):<br>\n<strong>Introducing GPT-4.1 in the API</strong><br>\n<a href=\"https://openai.com/index/gpt-4-1/\" target=\"_blank\">https://openai.com/index/gpt-4-1/</a><br>\nApr 14, 2025 Introducing GPT-4.1 in the API—a new family of models with across-the-board improvements, including major gains in coding, instruction following, and long-context understanding. We’re also releasing our first nano model. Available to developers worldwide starting today.In OpenAI-MRCR(opens in a new window), the model must answer a question that involves disambiguating between 2, 4, or 8 user prompts scattered amongst distractors. We’re also releasing Graphwalks(opens in a new window), a dataset for evaluating multi-hop long-context reasoning.\n\n<p><strong>Introducing 4o Image Generation - OpenAI</strong><br>\n<a href=\"https://openai.com/index/introducing-4o-image-generation/\" target=\"_blank\">https://openai.com/index/introducing-4o-image-generation/</a><br>\n At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. 
The result—image generation that is not only beautiful, but useful.</p>\n<p><strong>Introducing GPT-5.2 - OpenAI</strong><br>\n<a href=\"https://openai.com/index/introducing-gpt-5-2/\" target=\"_blank\">https://openai.com/index/introducing-gpt-5-2/</a><br>\n Model performance Economically valuable tasks GPT‑5.2 Thinking is the best model yet for real-world, professional use. On GDPval , an eval measuring well-specified knowledge work tasks across 44 occupations, GPT‑5.2 Thinking sets a new state-of-the-art score, and is our first model that performs at or above a human expert level.</p>\n<p><strong>ChatGPT — Release Notes | OpenAI Help Center</strong><br>\n<a href=\"https://help.openai.com/en/articles/6825453-chatgpt-release-notes\" target=\"_blank\">https://help.openai.com/en/articles/6825453-chatgpt-release-notes</a><br>\nDec 16, 2022 May 14, 2025. Releasing GPT-4.1 in ChatGPT for all paid users. Since its launch in the API in April, GPT-4.1 has become a favorite among developers—by popular demand, we’re making it available directly in ChatGPT. GPT-4.1 is a specialized model that excels at coding tasks. Compared to GPT-4o, it's even stronger at precise instruction following and web development tasks, and offers an alternative to OpenAI o3 and OpenAI o4-mini for simpler, everyday coding needs. Starting today, Plus, Pro, and Team users can access GPT-4.1 via the \"more models\" dropdown in the model picker.</p>\n<p><strong>GPT-4.1</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/GPT-4.1\" target=\"_blank\">https://en.wikipedia.org/wiki/GPT-4.1</a><br>\n GPT-4.1 is a large language model within OpenAI 's GPT series. It was released on April 14, 2025. GPT-4.1 can be accessed through the OpenAI API or the OpenAI Developer Playground. [1][2][3] Three different models were simultaneously released: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. 
[4]</p>\n<p><strong>Model Release Notes | OpenAI Help Center</strong><br>\n<a href=\"https://help.openai.com/en/articles/9624314-model-release-notes\" target=\"_blank\">https://help.openai.com/en/articles/9624314-model-release-notes</a><br>\nAug 01, 2024 Releasing GPT-4.1 in ChatGPT for all paid users (May 14, 2025). Since its launch in the API in April, GPT-4.1 has become a favorite among developers—by popular demand, we’re making it available directly in ChatGPT. GPT-4.1 is a specialized model that excels at coding tasks. Compared to GPT-4o, it's even stronger at precise instruction following and web development tasks, and offers an alternative to OpenAI o3 and OpenAI o4-mini for simpler, everyday coding needs. Starting today, Plus, Pro, and Team users can access GPT-4.1 via the \"more models\" dropdown in the model picker.</p>\n<p><strong>GPT-4 - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/GPT-4\" target=\"_blank\">https://en.wikipedia.org/wiki/GPT-4</a><br>\n OpenAI has not revealed technical details and statistics about GPT-4, such as the precise size of the model. [5] An early version of GPT-4 was integrated by Microsoft into Bing Chat, launched in February 2023. GPT-4 was released in ChatGPT in March 2023, [1][not verified in body] and removed in 2025. [6] GPT-4 is still available in ...</p>\n<p><strong>ChatGPT - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/ChatGPT\" target=\"_blank\">https://en.wikipedia.org/wiki/ChatGPT</a><br>\n On July 18, 2024, OpenAI released GPT-4o mini, a smaller version of GPT-4o which replaced GPT-3.5 Turbo on the ChatGPT interface. [141] GPT-4o's ability to generate images was released later, in March 2025, when it replaced DALL-E 3 in ChatGPT. 
[142]</p>\n<p><strong>GPT-5.2 in ChatGPT - OpenAI Help Center</strong><br>\n<a href=\"https://help.openai.com/en/articles/11909943-gpt-5-in-chatgpt\" target=\"_blank\">https://help.openai.com/en/articles/11909943-gpt-5-in-chatgpt</a><br>\n Plus / Business: GPT‑4o, GPT‑4.1, OpenAI o3, OpenAI o4‑mini Pro: GPT‑4o, GPT‑4.1, GPT‑4.5, OpenAI o3, OpenAI o4‑mini If you keep Show additional models turned off, older chats that used these models will continue with the closest GPT‑5.2 equivalent instead. Specifically: o3 will continue in GPT‑5.2 Thinking.</p>\n<p><strong>GPT-4.1 explained: Everything you need to know - TechTarget</strong><br>\n<a href=\"https://www.techtarget.com/WhatIs/feature/GPT-41-explained-Everything-you-need-to-know\" target=\"_blank\">https://www.techtarget.com/WhatIs/feature/GPT-41-explained-Everything-you-need-to-know</a><br>\n OpenAI released a new family of general-purpose models on April 14, 2025. The GPT-4.1 series includes three models with a developer focus – GPT-4.1, GPT-4.1 mini and GPT-4.1 nano. OpenAI is one of the well-known vendors of the generative AI era.</p>\n<p><strong>Gemini 3 Flash</strong><br>\n<a href=\"https://deepmind.google/models/gemini/flash/\" target=\"_blank\">https://deepmind.google/models/gemini/flash/</a><br>\nMay 14, 2024 Gemini 3 Flash has achieved a meaningful step up in reasoning, improving over 7% on Harvey’s BigLaw Bench from its predecessor, Gemini 2.5 Flash. These quality improvements, combined with Flash's low latency, are impactful for high-volume legal tasks such as extracting defined terms and cross-references from contracts.Mathematics AIME 2025. No tools.</p>\n<p><strong>Gemini 3 - Google DeepMind</strong><br>\n<a href=\"https://deepmind.google/models/gemini/\" target=\"_blank\">https://deepmind.google/models/gemini/</a><br>\nDec 06, 2023 Leverage Gemini 3 Flash’s multimodal capabilities in visual recognition and reasoning to add contextual UI on image generations. 
3 Flash has the capability to describe the content of the image in a compelling and interactive way. In this slingshot game, Gemini 3 Flash delivers near real-time strategic guidance by simultaneously analyzing the video and hand-tracking inputs.</p>\n<p><strong>Gemini 3 Pro - Google DeepMind</strong><br>\n<a href=\"https://deepmind.google/models/gemini/pro/\" target=\"_blank\">https://deepmind.google/models/gemini/pro/</a><br>\nMay 14, 2024 Our most intelligent model yet. Learn, build, and plan like never before with Gemini 3 Pro’s incredible reasoning powers. Gemini 3 is state-of-the-art on reasoning with unprecedented depth and nuance. Our best model for vibe coding and agentic coding. Gemini 3 brings exceptional instruction following – with meaningfully improved tool use and agentic coding.</p>\n<p><strong>Introducing Gemini 3 Flash: Benchmarks, global availability</strong><br>\n<a href=\"https://blog.google/products/gemini/gemini-3-flash/\" target=\"_blank\">https://blog.google/products/gemini/gemini-3-flash/</a><br>\nDec 17, 2025 Gemini 3 Flash offers Pro-grade reasoning at Flash-level speed and a lower cost. It's great for coding, complex analysis, and quick answers in interactive apps. Gemini 3 Flash is now the default model in the Gemini app and AI Mode in Search. Developers and everyday users can access Gemini 3 Flash via various Google platforms.</p>\n<p><strong>Gemini 3 Flash comes to the Gemini app - The Keyword</strong><br>\n<a href=\"https://blog.google/products/gemini/gemini-3-flash-gemini-app/\" target=\"_blank\">https://blog.google/products/gemini/gemini-3-flash-gemini-app/</a><br>\n Starting today, Gemini 3 Flash is rolling out in the Gemini app with “Fast” answering quickly and “Thinking” solving complex problems. 
Gemini 3 Pro is also available as “Pro” in the model picker and remains the best choice for advanced math and code.</p>\n<p><strong>Build with Gemini 3 Flash, frontier intelligence that scales with you</strong><br>\n<a href=\"https://blog.google/technology/developers/build-with-gemini-3-flash/\" target=\"_blank\">https://blog.google/technology/developers/build-with-gemini-3-flash/</a><br>\n Gemini 3 Flash offers powerful performance at less than a quarter of the cost of Gemini 3 Pro. This new model excels in coding, gaming, deepfake detection, and document analysis. You can access Gemini 3 Flash through Google AI Studio, Antigravity, Vertex AI, and more.</p>\n<p><strong>Here are Google’s latest AI updates from December 2025</strong><br>\n<a href=\"https://blog.google/technology/ai/google-ai-updates-december-2025/\" target=\"_blank\">https://blog.google/technology/ai/google-ai-updates-december-2025/</a><br>\nDec 29, 2025 Here’s a recap of our biggest AI updates from December, including the launch of Gemini 3 Flash, the release of new AI verification tools in the Gemini app and the arrival of Gemini’s powerful translation capabilities in Google Translate.We added new ways for you to personalize, create and share your Google Photos Recap. Google Photos Recap has returned to help you celebrate your favorite moments from 2025, now with more features to make the experience truly yours.</p>\n<p><strong>Gemini app updates: Deep Research, connected apps, personalization</strong><br>\n<a href=\"https://blog.google/products/gemini/new-gemini-app-features-march-2025/\" target=\"_blank\">https://blog.google/products/gemini/new-gemini-app-features-march-2025/</a><br>\nMar 13, 2025 The Gemini app is getting a bunch of new features, including Deep Research and personalization. Gemini 2.0 Flash Thinking Experimental is now available with a longer context window and file upload capabilities. 
Deep Research is now available for everyone to try and helps you research and synthesize information from the web. Gemini can now connect with your Google apps and services to provide more personalized responses. You can now create your own custom Gems to personalize Gemini for specific tasks.</p>\n<p><strong>Android Developers Blog: Build smarter apps with Gemini 3 Flash</strong><br>\n<a href=\"https://android-developers.googleblog.com/2025/12/build-smarter-apps-with-gemini-3-flash.html\" target=\"_blank\">https://android-developers.googleblog.com/2025/12/build-smarter-apps-with-gemini-3-flash.html</a><br>\nDec 01, 2025 Gemini 3 Flash is also available for AI assistance in Android Studio. While Gemini 3 Pro Preview is our best model for coding and agentic experiences, Gemini 3 Flash is engineered for speed, and great for common development tasks and questions. The new model is rolling out to developers using Gemini in Android Studio at no cost (as the default model) starting today.</p>\n<p><strong>Gemini 3 Developer Guide | Gemini API</strong><br>\n<a href=\"https://ai.google.dev/gemini-api/docs/gemini-3\" target=\"_blank\">https://ai.google.dev/gemini-api/docs/gemini-3</a><br>\nNov 18, 2025 Gemini 3 Flash is our latest 3-series model, with Pro-level intelligence at the speed and pricing of Flash. Nano Banana Pro (also known as Gemini 3 Pro Image) is our highest quality image generation model yet. You can try both Gemini 3 Pro and Flash for free in Google AI Studio, but currently, there is no free tier available for gemini-3-pro-preview in the Gemini API. Will my old thinking_budget code still work?
Yes, thinking_budget is still supported for backward compatibility, but we recommend migrating to thinking_level for more predictable performance.</p>\n<p><strong>Introducing computer use, a new Claude 3.5 Sonnet, and ...</strong><br>\n<a href=\"https://www.anthropic.com/news/3-5-models-and-computer-use\" target=\"_blank\">https://www.anthropic.com/news/3-5-models-and-computer-use</a><br>\nOct 22, 2024 The upgraded Claude 3.5 Sonnet is now available for all users. Starting today, developers can build with the computer use beta on the Anthropic ...</p>\n<p><strong>Developing a computer use model \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/research/developing-computer-use?ref=thediff.co\" target=\"_blank\">https://www.anthropic.com/research/developing-computer-use?ref=thediff.co</a><br>\nAug 28, 2025 Claude can now use computers. The latest version of Claude 3.5 Sonnet can, when run through the appropriate software setup, follow a user’s commands to move a cursor around their computer’s screen, click on relevant locations, and input information via a virtual keyboard, emulating the way people interact with their own computer.</p>\n<p><strong>Home \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/\" target=\"_blank\">https://www.anthropic.com/</a><br>\nAnthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems. AI will have a vast impact on the world. Anthropic is a public benefit corporation dedicated to securing its benefits and mitigating its risks. Claude Opus 4.5.
Introducing the best model in the world for coding, agents, computer use, and enterprise workflows.</p>\n<p><strong>Developing a computer use model</strong><br>\n<a href=\"https://www.anthropic.com/news/developing-computer-use\" target=\"_blank\">https://www.anthropic.com/news/developing-computer-use</a><br>\n Below, we share some insights from the research that went into developing computer use models—and into making them safer. Why computer use? Why ...</p>\n<p><strong>Introducing Claude Opus 4.5</strong><br>\n<a href=\"https://www.anthropic.com/news/claude-opus-4-5\" target=\"_blank\">https://www.anthropic.com/news/claude-opus-4-5</a><br>\nNov 24, 2025 It's intelligent, efficient, and the best model in the world for coding, agents, and computer use. It's also meaningfully better at everyday ...</p>\n<p><strong>Usage Policy Update \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/news/usage-policy-update\" target=\"_blank\">https://www.anthropic.com/news/usage-policy-update</a><br>\n These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks, as shared in our first threat intelligence report, Detecting and Countering Malicious Uses of Claude: March 2025.</p>\n<p><strong>Building agents with the Claude Agent SDK \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/engineering/building-agents-with-the-claude-agent-sdk\" target=\"_blank\">https://www.anthropic.com/engineering/building-agents-with-the-claude-agent-sdk</a><br>\n Published Sep 29, 2025 The Claude Agent SDK is a collection of tools that helps developers build powerful agents on top of Claude Code. 
In this article, we walk through how to get started and share our best practices.</p>\n<p><strong>Introducing Claude 3.5 Sonnet \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/news/claude-3-5-sonnet\" target=\"_blank\">https://www.anthropic.com/news/claude-3-5-sonnet</a><br>\n Introducing Claude 3.5 Sonnet—our most intelligent model yet. Sonnet now outperforms competitor models and Claude 3 Opus on key evaluations, at twice the speed.</p>\n<p><strong>The Rise of the Digital Intern: How Anthropic’s ‘Computer Use ...</strong><br>\n<a href=\"https://www.financialcontent.com/article/tokenring-2025-12-30-the-rise-of-the-digital-intern-how-anthropics-computer-use-redefined-the-ai-agent-landscape\" target=\"_blank\">https://www.financialcontent.com/article/tokenring-2025-12-30-the-rise-of-the-digital-intern-how-anthropics-computer-use-redefined-the-ai-agent-landscape</a><br>\n In summary, Anthropic’s \"Computer Use\" has transitioned from a daring experiment in late 2024 to an essential pillar of the 2025 digital economy. By teaching Claude to see and interact with the world through the same interfaces humans use, Anthropic has provided a blueprint for the future of work.</p>\n<p><strong>Computer use tool - Claude Docs</strong><br>\n<a href=\"https://platform.claude.com/docs/en/agents-and-tools/tool-use/computer-use-tool\" target=\"_blank\">https://platform.claude.com/docs/en/agents-and-tools/tool-use/computer-use-tool</a><br>\n Computer use is currently in beta and requires a beta header: \"computer-use-2025-11-24\" for Claude Opus 4.5; \"computer-use-2025-01-24\" for Claude Sonnet 4.5 ...</p>\n<p><strong>Codex CLI</strong><br>\n<a href=\"https://developers.openai.com/codex/cli/\" target=\"_blank\">https://developers.openai.com/codex/cli/</a><br>\nApr 17, 2025 Codex CLI is a coding agent that you can run locally from your terminal and that can read, modify, and run code on your machine, in the chosen directory. 
It’s open source, built in Rust for speed and efficiency, and rapidly improving at openai/codex on GitHub. The Codex CLI officially supports macOS and Linux. Windows support is still experimental — we recommend running in WSL. See the Windows setup guide for step-by-step instructions.</p>\n<p><strong>Codex changelog</strong><br>\n<a href=\"https://developers.openai.com/codex/changelog\" target=\"_blank\">https://developers.openai.com/codex/changelog</a><br>\n Starting today, the CLI and IDE Extension will default to gpt-5.2-codex for users who are signed in with ChatGPT. API access for the model will come soon. You can also use the /model slash command in the CLI. In the Codex IDE Extension you can select GPT-5.2-Codex from the dropdown menu.</p>\n<p><strong>Codex CLI features</strong><br>\n<a href=\"https://developers.openai.com/codex/cli/features/\" target=\"_blank\">https://developers.openai.com/codex/cli/features/</a><br>\n Codex launches into a full-screen terminal UI that can read your repository, make edits, and run commands as you iterate together. Use it whenever you want a ...</p>\n<p><strong>Command line options</strong><br>\n<a href=\"https://developers.openai.com/codex/cli/reference/\" target=\"_blank\">https://developers.openai.com/codex/cli/reference/</a><br>\nOct 28, 2025 This page catalogs every documented Codex CLI command and flag. Use the interactive tables to search by key or description. Each section indicates whether the option is stable or experimental and calls out risky combinations. The CLI inherits most defaults from ~/.codex/config.toml.
Any -c key=value overrides you pass at the command line take precedence for that invocation.</p>\n<p><strong>OpenAI for Developers in 2025</strong><br>\n<a href=\"https://developers.openai.com/blog/openai-for-developers-2025/\" target=\"_blank\">https://developers.openai.com/blog/openai-for-developers-2025/</a><br>\nDec 30, 2025 The open-source Codex CLI (GitHub) brought agent-style coding directly into local environments, enabling developers to run Codex over real ...</p>\n<p><strong>Codex IDE extension - developers.openai.com</strong><br>\n<a href=\"https://developers.openai.com/codex/ide\" target=\"_blank\">https://developers.openai.com/codex/ide</a><br>\n Codex is OpenAI’s coding agent that can read, edit, and run code. It helps you build faster, squash bugs, and understand unfamiliar code. With the Codex VS Code extension you can use Codex side-by-side in your IDE, or delegate tasks to the cloud.</p>\n<p><strong>Agent Skills - developers.openai.com</strong><br>\n<a href=\"https://developers.openai.com/codex/skills\" target=\"_blank\">https://developers.openai.com/codex/skills</a><br>\n Use agent skills to extend Codex with task-specific capabilities. A skill packages instructions, resources, and optional scripts so Codex can follow a workflow reliably. You can share skills across teams or with the community. Skills build on the open agent skills standard. Skills are available in both the Codex CLI and IDE extensions. 
Agent skill definition: A skill captures a capability ...</p>\n<p><strong>Introducing Codex</strong><br>\n<a href=\"https://openai.com/index/introducing-codex/\" target=\"_blank\">https://openai.com/index/introducing-codex/</a><br>\nMay 16, 2025 We also envision deeper integrations across the tools you already use: today Codex connects with GitHub, and soon you’ll be able to assign tasks from Codex CLI, ChatGPT Desktop, or even tools such as your issue tracker or CI system.</p>\n<p><strong>Codex | OpenAI</strong><br>\n<a href=\"https://openai.com/codex/\" target=\"_blank\">https://openai.com/codex/</a><br>\nStarting from a prompt or spec, Codex navigates your repo to edit files, run commands, and execute tests. Ship new features, fix bugs, brainstorm solutions, or tackle whatever’s next with the Codex CLI and IDE extension—available in VSCode, Cursor, and Windsurf.</p>\n<p><strong>Introducing upgrades to Codex</strong><br>\n<a href=\"https://openai.com/index/introducing-upgrades-to-codex/\" target=\"_blank\">https://openai.com/index/introducing-upgrades-to-codex/</a><br>\nSep 15, 2025 Codex just got faster, more reliable, and better at real-time collaboration and tackling tasks independently anywhere you develop—whether ...</p>\n<p><strong>Imagen</strong><br>\n<a href=\"https://deepmind.google/models/imagen/\" target=\"_blank\">https://deepmind.google/models/imagen/</a><br>\nImagen 4 is our best text-to-image model yet, with photorealistic images, near real-time speed, and sharper clarity — to bring your imagination to life. We're releasing Imagen 4 with our latest privacy, safety and security features.
This includes SynthID, our tool that embeds an invisible digital watermark directly into an image, allowing it to be identified as AI generated.</p>\n<p><strong>Imagen 4 is now available in the Gemini API and Google AI Studio</strong><br>\n<a href=\"https://developers.googleblog.com/en/imagen-4-now-available-in-the-gemini-api-and-google-ai-studio/\" target=\"_blank\">https://developers.googleblog.com/en/imagen-4-now-available-in-the-gemini-api-and-google-ai-studio/</a><br>\nJun 25, 2025 Imagen 4 offers significantly improved text rendering over our prior image models and pushes the boundaries of text-to-image generation quality. The Imagen 4 Family: Imagen 4 and Imagen 4 Ultra. We’re introducing two models within the Imagen 4 family, built to serve a variety of creative needs: Imagen 4: Your go-to for most tasks.</p>\n<p><strong>Announcing Imagen 4 Fast and the general availability of ...</strong><br>\n<a href=\"https://developers.googleblog.com/announcing-imagen-4-fast-and-imagen-4-family-generally-available-in-the-gemini-api/\" target=\"_blank\">https://developers.googleblog.com/announcing-imagen-4-fast-and-imagen-4-family-generally-available-in-the-gemini-api/</a><br>\nAug 15, 2025 Announcing Imagen 4 Fast and the general availability of the Imagen 4 family in the Gemini API.
Alisa Fortin, Product Manager.</p>\n<p><strong>Announcing Imagen 4 Fast and the general availability of the Imagen 4 ...</strong><br>\n<a href=\"https://developers.googleblog.com/en/announcing-imagen-4-fast-and-imagen-4-family-generally-available-in-the-gemini-api/\" target=\"_blank\">https://developers.googleblog.com/en/announcing-imagen-4-fast-and-imagen-4-family-generally-available-in-the-gemini-api/</a><br>\nAug 15, 2025 Discover Imagen 4 Fast, Google's new speed-optimized text-to-image model, now generally available with Imagen 4 and 4 Ultra in the Gemini API. This release marks a significant step forward in text-to-image generation quality, with substantial improvements in text rendering over our previous models. The Imagen 4 family: A model for your creative needs.</p>\n<p><strong>Gemini App: 7 updates from Google I/O 2025</strong><br>\n<a href=\"https://blog.google/products/gemini/gemini-app-updates-io-2025/\" target=\"_blank\">https://blog.google/products/gemini/gemini-app-updates-io-2025/</a><br>\nMay 20, 2025 The Gemini app is getting major new updates, from Veo 3 and Imagen 4 to Deep Research and Canvas. The Gemini app has seen incredible momentum recently—and today, we’re introducing new capabilities to help you do even more. Here’s what we announced at Google I/O: Gemini Live, with camera and screen sharing, is now free on Android and iOS for everyone, so you can point your phone at anything and talk it through. Imagen 4, our new image generation model, comes built in and is known for its image quality, better text rendering and speed.</p>\n<p><strong>Google I/O 2025: 100 things Google announced</strong><br>\n<a href=\"https://blog.google/technology/ai/google-io-2025-all-our-announcements/\" target=\"_blank\">https://blog.google/technology/ai/google-io-2025-all-our-announcements/</a><br>\nMay 22, 2025 Learn more about the biggest announcements and launches from Google’s 2025 I/O developer conference.
Imagen 4 is our latest Imagen model, and it has remarkable clarity in fine details like skin, fur and intricate textures, and excels in both photorealistic and abstract styles. Imagen 4 is available today in the Gemini app.</p>\n<p><strong>Fuel your creativity with new generative media models and tools</strong><br>\n<a href=\"https://blog.google/technology/ai/generative-media-models-io-2025/\" target=\"_blank\">https://blog.google/technology/ai/generative-media-models-io-2025/</a><br>\nMay 20, 2025 Flow is available today for Google AI Pro and Ultra plan subscribers in the U.S., with more countries coming soon. Imagen 4: Stunning quality and superior typography. Our latest Imagen model combines speed with precision to create stunning images. Imagen 4 has remarkable clarity in fine details like intricate fabrics, water droplets, and animal fur, and excels in both photorealistic and abstract styles.</p>\n<p><strong>Imagen (text-to-image model)</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Imagen_(text-to-image_model)\" target=\"_blank\">https://en.wikipedia.org/wiki/Imagen_(text-to-image_model)</a><br>\nJun 30, 2022 Imagen is a series of text-to-image models developed by Google DeepMind. They were developed by Google Brain until the company's merger with DeepMind in April 2023. Imagen is primarily used to generate images from text prompts, similar to Stability AI's Stable Diffusion, OpenAI's DALL-E, or Midjourney. At I/O 2025 the company released an improved model, Imagen 4.[8]</p>\n<p><strong>Gemini (language model) - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Gemini_(language_model)\" target=\"_blank\">https://en.wikipedia.org/wiki/Gemini_(language_model)</a><br>\nSep 13, 2023 On January 30, 2025, Google released Gemini 2.0 Flash as the new default model, with Gemini 1.5 Flash still available for usage. This was followed by the release of Gemini 2.0 Pro on February 5, 2025.
Additionally, Google released Gemini 2.0 Flash Thinking Experimental, which details the language model's thinking process when responding to prompts.[46].</p>\n<p><strong>Google 2025 recap: Research breakthroughs of the year</strong><br>\n<a href=\"https://blog.google/technology/ai/2025-research-breakthroughs/\" target=\"_blank\">https://blog.google/technology/ai/2025-research-breakthroughs/</a><br>\n In 2025, Google made significant AI research breakthroughs with models like Gemini 3 and Gemma 3. These advancements improved AI's reasoning, multimodality, and efficiency, leading to new products and features across Google's portfolio. Expect more AI-driven innovations in science, computing, and tools for global challenges as Google prioritizes responsible AI development and collaboration.</p>\n</details>\n<details><summary>Searched with Kagi<p></p>\n<ul>\n<li>Mistral Small 3 January 2025 release</li>\n<li>Mistral Large 3 2025</li>\n<li>Gemma 3 Google 2025 release</li>\n<li>OpenAI Operator agent January 2025</li>\n<li>Adobe Firefly 2025 releases</li></ul></summary><br>\nSources (50):<br>\n<strong>Mistral Small 3</strong><br>\n<a href=\"https://mistral.ai/news/mistral-small-3\" target=\"_blank\">https://mistral.ai/news/mistral-small-3</a><br>\nToday we’re introducing Mistral Small 3, a latency-optimized 24B-parameter model released under the Apache 2.0 license. Mistral Small 3 is competitive with larger models such as Llama 3.3 70B or Qwen 32B, and is an excellent open replacement for opaque proprietary models like GPT4o-mini.\n\n<p><strong>Introducing Mistral 3</strong><br>\n<a href=\"https://mistral.ai/news/mistral-3\" target=\"_blank\">https://mistral.ai/news/mistral-3</a><br>\n Today, we announce Mistral 3, the next generation of Mistral models. 
Mistral 3 includes three state-of-the-art small, dense models (14B, 8B, ...</p>\n<p><strong>Mistral Small 3: Apache 2.0, 81% MMLU, 150 tokens/s</strong><br>\n<a href=\"https://mistral.ai/news/mistral-small-3?ref=websitehunt.co\" target=\"_blank\">https://mistral.ai/news/mistral-small-3?ref=websitehunt.co</a><br>\nJan 30, 2025, Mistral AI Team. Today we’re introducing Mistral Small 3, a latency-optimized 24B-parameter model released under the Apache 2.0 license. Up and to the Left. Mistral Small 3 is competitive with larger models such as Llama 3.3 70B or Qwen 32B, and is an excellent open replacement for opaque proprietary models like GPT4o-mini. Mistral Small 3 is on par with Llama 3.3 70B instruct, while being more than 3x faster on the same hardware.</p>\n<p><strong>Mistral Small 3.1</strong><br>\n<a href=\"https://mistral.ai/news/mistral-small-3-1\" target=\"_blank\">https://mistral.ai/news/mistral-small-3-1</a><br>\nMar 17, 2025 Mistral Small 3.1 is released under an Apache 2.0 license. Modern AI applications demand a blend of capabilities—handling text, understanding multimodal inputs, supporting multiple languages, and managing long contexts—with low latency and cost efficiency.</p>\n<p><strong>Mistral Small 3 | Mistral AI | Frontier AI in your hands</strong><br>\n<a href=\"https://mistral.ai/news/mistral-small-3/?_bhlid=f84fafd4d60ea4534b9e60938ec88baf65e77465\" target=\"_blank\">https://mistral.ai/news/mistral-small-3/?_bhlid=f84fafd4d60ea4534b9e60938ec88baf65e77465</a><br>\n Today we’re introducing Mistral Small 3, a latency-optimized 24B-parameter model released under the Apache 2.0 license.
Mistral Small 3 is competitive with larger models such as Llama 3.3 70B or Qwen 32B, and is an excellent open replacement for opaque proprietary models like GPT4o-mini.</p>\n<p><strong>Mistral Small 3</strong><br>\n<a href=\"https://simonwillison.net/2025/Jan/30/mistral-small-3/\" target=\"_blank\">https://simonwillison.net/2025/Jan/30/mistral-small-3/</a><br>\nJan 30, 2025 Mistral Small 3 (via) First model release of 2025 for French AI lab Mistral, who describe Mistral Small 3 as \"a latency-optimized 24B-parameter model released under the Apache 2.0 license.\" More notably, they claim the following: Mistral Small 3 is competitive with larger models such as Llama 3.3 70B or Qwen 32B, and is an excellent open replacement for opaque proprietary models like GPT4o-mini. <code>llm install llm-ollama</code>, then <code>llm -m mistral-small:24b \"say hi\"</code>. Posted 30th January 2025 at 3:36 pm.</p>\n<p><strong>Mistral Small 3.1</strong><br>\n<a href=\"https://simonwillison.net/2025/Mar/17/mistral-small-31/\" target=\"_blank\">https://simonwillison.net/2025/Mar/17/mistral-small-31/</a><br>\nMar 17, 2025 Mistral Small 3 came out in January and was a notable, genuinely excellent local model that used an Apache 2.0 license. Mistral Small 3.1 offers a significant improvement: it's multi-modal (images) and has an increased 128,000 token context length, while still \"fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized\" (according to their model card). Update 20th March 2025: I've now run the text version on my laptop using mlx-community/Mistral-Small-3.1-Text-24B-Instruct-2503-8bit and llm-mlx</p>\n<p><strong>Mistral-Small 3.2</strong><br>\n<a href=\"https://simonwillison.net/2025/Jun/20/mistral-small-32/\" target=\"_blank\">https://simonwillison.net/2025/Jun/20/mistral-small-32/</a><br>\nJun 20, 2025 Mistral-Small 3.2. Released on Hugging Face a couple of hours ago, so far there aren't any quantizations to run it on a Mac but I'm sure those will emerge pretty quickly.
This is a minor bump to Mistral Small 3.1, one of my favorite local models. I've been running Small 3.1 via Ollama where it's a 15GB download - these 24 billion parameter models are a great balance between capabilities and not using up all of the available RAM on my laptop. I expect Ollama will add 3.2 imminently.</p>\n<p><strong>Mistral AI released Mistral Small 3, a 24B open-source LLM</strong><br>\n<a href=\"https://www.testingcatalog.com/mistral-ai-released-mistral-small-3-a-24b-open-source-llm/\" target=\"_blank\">https://www.testingcatalog.com/mistral-ai-released-mistral-small-3-a-24b-open-source-llm/</a><br>\nJan 31, 2025 Mistral AI has officially released Mistral Small 3, a 24-billion-parameter open-source language model designed for high efficiency and low latency. Announced on January 30, 2025, the model is available under the Apache 2.0 license, allowing developers to freely modify, deploy, and integrate it into various applications. The release positions Mistral Small 3 as a competitive alternative to larger models like Meta’s Llama 3.3 70B and Alibaba’s Qwen 32B, while offering over three times faster performance on the same hardware.</p>\n<p><strong>Mistral Small 3: 24B open-source AI model optimized for speed</strong><br>\n<a href=\"https://www.testingcatalog.com/mistral-small-3-a-24b-open-source-ai-model-optimized-for-speed/\" target=\"_blank\">https://www.testingcatalog.com/mistral-small-3-a-24b-open-source-ai-model-optimized-for-speed/</a><br>\nMar 18, 2025 Mistral AI has announced the release of Mistral Small 3, a 24-billion-parameter open-source language model optimized for low latency and high efficiency. — Mistral AI (@MistralAI), March 17, 2025. The model achieves over 81% accuracy on the Massive Multitask Language Understanding (MMLU) benchmark and processes up to 150 tokens per second, making it one of the most efficient models in its size category.
Mistral Small 3 is particularly well-suited for local deployment due to its reduced number of layers, which minimizes latency during inference.</p>\n<p><strong>Introducing Mistral 3</strong><br>\n<a href=\"https://mistral.ai/news/mistral-3\" target=\"_blank\">https://mistral.ai/news/mistral-3</a><br>\nDec 02, 2025 Mistral Large 3 is Mistral’s first mixture-of-experts model since the seminal Mixtral series, and represents a substantial step forward in pretraining at Mistral. After post-training, the model achieves parity with the best instruction-tuned open-weight models on the market on general prompts, while also demonstrating image understanding and best-in-class performance on multilingual conversations (i.e., non-English/Chinese).</p>\n<p><strong>Models - from cloud to edge</strong><br>\n<a href=\"https://mistral.ai/models\" target=\"_blank\">https://mistral.ai/models</a><br>\n Mistral Large 3 is our largest model to date, featuring 41B active parameters and 675B total parameters, with a large 256k ...</p>\n<p><strong>Mistral Large 3 now available on IBM watsonx</strong><br>\n<a href=\"https://www.ibm.com/new/announcements/mistral-large-3-now-available-on-ibm-watsonx\" target=\"_blank\">https://www.ibm.com/new/announcements/mistral-large-3-now-available-on-ibm-watsonx</a><br>\n Today, we’re excited to announce that Mistral Large 3—Mistral AI’s newest state-of-the-art multimodal foundation model—is now available on IBM watsonx.ai.</p>\n<p><strong>Mistral Large 3 - Mistral AI | Mistral Docs</strong><br>\n<a href=\"https://docs.mistral.ai/models/mistral-large-3-25-12\" target=\"_blank\">https://docs.mistral.ai/models/mistral-large-3-25-12</a><br>\nDec 02, 2025 Mistral Large 3 is a state-of-the-art, open-weight, general-purpose multimodal model with a granular Mixture-of-Experts architecture.
It features 41B active parameters and 675B total parameters.</p>\n<p><strong>Documentation for the deployment and usage of Mistral AI's LLMs</strong><br>\n<a href=\"https://docs.mistral.ai/getting-started/models\" target=\"_blank\">https://docs.mistral.ai/getting-started/models</a><br>\nThe model list includes Mistral Large 3; Mistral Medium 3.1 (Premier), a frontier-class multimodal model released August 2025 (v25.08); and Mistral Small 3.2.</p>\n<p><strong>Introducing Mistral Large 3 in Microsoft Foundry: Open, capable, and ...</strong><br>\n<a href=\"https://azure.microsoft.com/en-us/blog/introducing-mistral-large-3-in-microsoft-foundry-open-capable-and-ready-for-production-workloads/\" target=\"_blank\">https://azure.microsoft.com/en-us/blog/introducing-mistral-large-3-in-microsoft-foundry-open-capable-and-ready-for-production-workloads/</a><br>\nDec 02, 2025 According to Mistral, Mistral Large 3 shows fewer breakdowns and more consistent behavior than most peers, especially in multi-turn conversations and complex, extended inputs. It is designed for production, not just experimentation. Mistral 3 is optimized for real-world scenarios. Instruction reliability you can depend on. Many open models excel on benchmarks but struggle with instruction clarity when deployed in real workflows. Mistral Large 3 reverses that trend by demonstrating</p>\n<p><strong>Introducing Mistral-Large on Azure in partnership with Mistral AI</strong><br>\n<a href=\"https://azure.microsoft.com/en-us/blog/microsoft-and-mistral-ai-announce-new-partnership-to-accelerate-ai-innovation-and-introduce-mistral-large-first-on-azure/\" target=\"_blank\">https://azure.microsoft.com/en-us/blog/microsoft-and-mistral-ai-announce-new-partnership-to-accelerate-ai-innovation-and-introduce-mistral-large-first-on-azure/</a><br>\nFeb 26, 2024 “After exploring Mistral Large during its early access period, we’ve been impressed by its performance on medical terminology.
As we continue to innovate in healthcare, we’re open to collaborations that can help us and our partners grow together. Mistral AI represents an exciting opportunity for mutual advancement in artificial intelligence, both in France and internationally.”</p>\n<p><strong>Mistral Large 3 and Ministral 3 family now available first on ...</strong><br>\n<a href=\"https://aws.amazon.com/about-aws/whats-new/2025/12/mistral-large-3-ministral-3-family-available-amazon-bedrock/\" target=\"_blank\">https://aws.amazon.com/about-aws/whats-new/2025/12/mistral-large-3-ministral-3-family-available-amazon-bedrock/</a><br>\nDec 02, 2025 Mistral Large 3 and Ministral 3 family now available first on Amazon Bedrock. Posted on: Dec 2, 2025. Customers can now use Mistral Large 3 ...</p>\n<p><strong>What is Mistral Large 3? an in-depth explainer - CometAPI</strong><br>\n<a href=\"https://www.cometapi.com/en/what-is-mistral-large-3-an-in-depth-explainer/\" target=\"_blank\">https://www.cometapi.com/en/what-is-mistral-large-3-an-in-depth-explainer/</a><br>\n Mistral Large 3 is the newest “frontier” model family released by Mistral AI in early December 2025.</p>\n<p><strong>What is Mistral Large 3? an in-depth explainer - CometAPI - All AI...</strong><br>\n<a href=\"https://www.cometapi.com/what-is-mistral-large-3-an-in-depth-explainer/\" target=\"_blank\">https://www.cometapi.com/what-is-mistral-large-3-an-in-depth-explainer/</a><br>\nDec 13, 2025 Mistral Large 3 is the newest “frontier” model family released by Mistral AI in early December 2025. 
It’s an open-weight, production-oriented, multimodal foundation model built around a granular sparse Mixture-of-Experts (MoE) design and intended to deliver “frontier” reasoning, long-context understanding, and vision + text capabilities while keeping inference practical through sparsity and modern quantization.</p>\n<p><strong>Gemma releases | Google AI for Developers</strong><br>\n<a href=\"https://ai.google.dev/gemma/docs/releases\" target=\"_blank\">https://ai.google.dev/gemma/docs/releases</a><br>\nApr 09, 2024 Release of Gemma 3 in 270M size. Release of T5Gemma across different parameter sizes. Release of MedGemma 27B parameter multimodal model. Release of Gemma 3n in E2B and E4B sizes. Release of MedGemma in 4B and 27B parameter sizes. Release of Gemma 3 in 1B, 4B, 12B and 27B sizes. Release of ShieldGemma 2.</p>\n<p><strong>Gemma 3 model overview</strong><br>\n<a href=\"https://ai.google.dev/gemma/docs/core\" target=\"_blank\">https://ai.google.dev/gemma/docs/core</a><br>\nJun 27, 2024 Gemma 3 model overview. Gemma is a family of generative artificial intelligence (AI) models and you can use them in a wide variety of generation tasks, including question answering, summarization, and reasoning. Gemma models are provided with open weights and permit responsible commercial use, allowing you to tune and deploy them in your own projects and applications. The Gemma 3 release includes the following key features</p>\n<p><strong>Gemma 3 model card | Google AI for Developers</strong><br>\n<a href=\"https://ai.google.dev/gemma/docs/core/model_card_3\" target=\"_blank\">https://ai.google.dev/gemma/docs/core/model_card_3</a><br>\nMar 12, 2025 Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. 
Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions.</p>\n<p><strong>google/gemma-3-270m · Hugging Face</strong><br>\n<a href=\"https://huggingface.co/google/gemma-3-270m\" target=\"_blank\">https://huggingface.co/google/gemma-3-270m</a><br>\nAug 14, 2025 To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.</p>\n<p><strong>Welcome Gemma 3: Google's all new multimodal, multilingual, long...</strong><br>\n<a href=\"https://huggingface.co/blog/gemma3\" target=\"_blank\">https://huggingface.co/blog/gemma3</a><br>\nMar 12, 2025</p>\n<p><strong>google/gemma-3-270m-it · Hugging Face</strong><br>\n<a href=\"https://huggingface.co/google/gemma-3-270m-it\" target=\"_blank\">https://huggingface.co/google/gemma-3-270m-it</a><br>\nAug 14, 2025 To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below.
Requests are processed immediately. Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.</p>\n<p><strong>google/gemma-3-27b-it</strong><br>\n<a href=\"https://huggingface.co/google/gemma-3-27b-it\" target=\"_blank\">https://huggingface.co/google/gemma-3-27b-it</a><br>\nAug 11, 2025 Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions.</p>\n<p><strong>Gemma 3: Google's new open model based on Gemini 2.0</strong><br>\n<a href=\"https://blog.google/technology/developers/gemma-3/\" target=\"_blank\">https://blog.google/technology/developers/gemma-3/</a><br>\nMar 12, 2025 Gemma 3 is a collection of lightweight, state-of-the-art open models designed to run on devices. Gemma 3 outperforms other models in its size class, making it ideal for single-GPU or TPU applications. Gemma 3 supports over 140 languages and offers advanced text and visual reasoning capabilities.</p>\n<p><strong>Google 2025 recap: Research breakthroughs of the year</strong><br>\n<a href=\"https://blog.google/technology/ai/2025-research-breakthroughs/\" target=\"_blank\">https://blog.google/technology/ai/2025-research-breakthroughs/</a><br>\n In 2025, Google made significant AI research breakthroughs with models like Gemini 3 and Gemma 3. These advancements improved AI's reasoning, multimodality, and efficiency, leading to new products and features across Google's portfolio. 
Expect more AI-driven innovations in science, computing, and tools for global challenges as Google prioritizes responsible AI development and collaboration.</p>\n<p><strong>Introducing Gemma 3 270M: The compact model for hyper- ...</strong><br>\n<a href=\"https://developers.googleblog.com/en/introducing-gemma-3-270m/\" target=\"_blank\">https://developers.googleblog.com/en/introducing-gemma-3-270m/</a><br>\nAug 14, 2025 We introduced Gemma 3 and Gemma 3 QAT, delivering state-of-the-art performance for single cloud and desktop accelerators. Then, we announced the full release of Gemma 3n, a mobile-first architecture bringing powerful, real-time multimodal AI directly to edge devices.</p>\n<p><strong>Introducing Operator</strong><br>\n<a href=\"https://openai.com/index/introducing-operator/\" target=\"_blank\">https://openai.com/index/introducing-operator/</a><br>\nJan 23, 2025 July 17, 2025 update: Operator is now fully integrated into ChatGPT as ChatGPT agent. To access these updated capabilities, simply select “agent mode” from the dropdown in the composer and enter your query directly within ChatGPT. As a result, the standalone Operator site (operator.chatgpt.com) will sunset in the coming weeks.</p>\n<p><strong>Computer-Using Agent | OpenAI</strong><br>\n<a href=\"https://openai.com/index/computer-using-agent/\" target=\"_blank\">https://openai.com/index/computer-using-agent/</a><br>\nJan 23, 2025 Powering Operator is Computer-Using Agent (CUA), a model that combines GPT‑4o's vision capabilities with advanced reasoning through reinforcement learning. 
CUA is trained to interact with graphical user interfaces (GUIs)—the buttons, menus, and text fields people see on a screen—just as humans do.</p>\n<p><strong>Introducing deep research - OpenAI</strong><br>\n<a href=\"https://openai.com/index/introducing-deep-research/\" target=\"_blank\">https://openai.com/index/introducing-deep-research/</a><br>\n July 17, 2025 update: Deep research can now go even deeper and broader with access to a visual browser as part of ChatGPT agent. To access these updated capabilities, simply select “agent mode” from the dropdown in the composer and enter your query directly. The original deep research functionality remains available via the “deep research” option in the tools menu.</p>\n<p><strong>Live 23rd January '25: Introduction to Operator & Agents</strong><br>\n<a href=\"https://community.openai.com/t/live-23rd-january-25-introduction-to-operator-agents/1101532\" target=\"_blank\">https://community.openai.com/t/live-23rd-january-25-introduction-to-operator-agents/1101532</a><br>\nJan 23, 2025 dominickwirzba January 23, 2025, 8:04pm 7. I really like what I see in this demo. It directly addresses some of the biggest challenges I’ve faced over the past few months, and I can already see how Operator could be a game-changer for many workflows. Fair enough as OpenAI is a U.S. company. Any ETA when it becomes avail in Australia? 
</p>\n<p><strong>Everything you need to know about Operator, OpenAI's new AI agent | IT Pro</strong><br>\n<a href=\"https://www.itpro.com/technology/artificial-intelligence/openai-unveils-its-operator-agent-to-help-users-automate-tasks-heres-everything-you-need-to-know\" target=\"_blank\">https://www.itpro.com/technology/artificial-intelligence/openai-unveils-its-operator-agent-to-help-users-automate-tasks-heres-everything-you-need-to-know</a><br>\n Only a few weeks into the year, OpenAI has unveiled Operator, an agent that uses its own web browser to perform tasks, such as typing, clicking and scrolling, the company explained in a blog post.</p>\n<p><strong>OpenAI launches Operator, an AI agent that performs tasks ...</strong><br>\n<a href=\"https://techcrunch.com/2025/01/23/openai-launches-operator-an-ai-agent-that-performs-tasks-autonomously/\" target=\"_blank\">https://techcrunch.com/2025/01/23/openai-launches-operator-an-ai-agent-that-performs-tasks-autonomously/</a><br>\nJan 23, 2025 10:00 AM PST · January 23, 2025. OpenAI CEO Sam Altman kicked off this year by saying in a blog post that 2025 would be big for AI agents, tools that can automate tasks and take actions on your behalf. Now, we’re seeing OpenAI’s first real attempt. OpenAI announced on Thursday that it is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator is coming to U.S. users on ChatGPT’s $200 Pro subscription plan first.</p>\n<p><strong>OpenAI may preview its agent tool for users on the... 
| TechCrunch</strong><br>\n<a href=\"https://techcrunch.com/2025/01/23/openais-agent-tool-will-be-available-to-users-paying-200-per-month-for-pro/\" target=\"_blank\">https://techcrunch.com/2025/01/23/openais-agent-tool-will-be-available-to-users-paying-200-per-month-for-pro/</a><br>\nJan 23, 2025 We may see OpenAI’s agent tool, Operator, released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code.</p>\n<p><strong>OpenAI's agent tool may be nearing release | TechCrunch</strong><br>\n<a href=\"https://techcrunch.com/2025/01/20/openais-agent-tool-may-be-nearing-release/\" target=\"_blank\">https://techcrunch.com/2025/01/20/openais-agent-tool-may-be-nearing-release/</a><br>\nJan 21, 2025 — Tibor Blaho (@btibor91) January 19, 2025. According to Blaho, OpenAI’s site also contains not-yet-public tables comparing the performance of Operator to other computer-using AI systems. The tables may well be placeholders. But if the numbers are accurate, they suggest that Operator isn’t 100% reliable, depending on the task. 
OpenAI website already has references to Operator/OpenAI CUA (Computer Use Agent) – “Operator System Card Table”, “Operator Research Eval Table” and “Operator Refusal Rate Table”.</p>\n<p><strong>ChatGPT: Everything you need to know about the AI chatbot</strong><br>\n<a href=\"https://techcrunch.com/2025/12/22/chatgpt-everything-to-know-about-the-ai-chatbot/\" target=\"_blank\">https://techcrunch.com/2025/12/22/chatgpt-everything-to-know-about-the-ai-chatbot/</a><br>\nDec 22, 2025 OpenAI launches Operator, an AI agent that performs tasks autonomously OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and ...</p>\n<p><strong>A comprehensive list of 2025 tech layoffs - TechCrunch</strong><br>\n<a href=\"https://techcrunch.com/2025/12/22/tech-layoffs-2025-list/\" target=\"_blank\">https://techcrunch.com/2025/12/22/tech-layoffs-2025-list/</a><br>\nDec 22, 2025 A complete list of all the known layoffs in tech, from Big Tech to startups, broken down by month throughout 2024 and 2025.</p>\n<p><strong>What's new in Adobe Firefly</strong><br>\n<a href=\"https://helpx.adobe.com/firefly/web/whats-new/new-features/whats-new.html\" target=\"_blank\">https://helpx.adobe.com/firefly/web/whats-new/new-features/whats-new.html</a><br>\n Explore the new and enhanced features in the latest releases of Adobe Firefly. Bring the power of Firefly to your iOS device and generate stunning visuals on the go. 
The Firefly iOS app gives you fast, intuitive tools to create images, videos, sounds, and edits wherever inspiration strikes.</p>\n<p><strong>Adobe Firefly: The next evolution of creative AI is here</strong><br>\n<a href=\"https://blog.adobe.com/en/publish/2025/04/24/adobe-firefly-next-evolution-creative-ai-is-here\" target=\"_blank\">https://blog.adobe.com/en/publish/2025/04/24/adobe-firefly-next-evolution-creative-ai-is-here</a><br>\nApr 24, 2025 The latest Firefly release sets a new standard for visual content generation, with Firefly Image Model 4 delivering unmatched definition and realism for high-resolution images, while the Firefly Video Model enables dynamic, commercially safe video creation.</p>\n<p><strong>Explore the new Adobe Firefly, your all-in-one creative AI ...</strong><br>\n<a href=\"https://blog.adobe.com/en/publish/2025/10/28/explore-new-adobe-firefly-your-all-in-one-home-ai-powered-creativity\" target=\"_blank\">https://blog.adobe.com/en/publish/2025/10/28/explore-new-adobe-firefly-your-all-in-one-home-ai-powered-creativity</a><br>\nOct 28, 2025 The all-new Adobe Firefly, launched at MAX 2025, is your one-stop creative space that offers access to top AI models and helps you brainstorm, generate, create and edit amazing videos, images, sound and more – all in one place.</p>\n<p><strong>Adobe Firefly improves AI video creation with new tools ...</strong><br>\n<a href=\"https://blog.adobe.com/en/publish/2025/12/16/adobe-firefly-improves-ai-video-creation-tools-new-models-unlimited-generations\" target=\"_blank\">https://blog.adobe.com/en/publish/2025/12/16/adobe-firefly-improves-ai-video-creation-tools-new-models-unlimited-generations</a><br>\nDec 16, 2025 Adobe announces new Adobe Firefly video AI updates—adding precision “Prompt to Edit” and camera-motion controls, Topaz Astra upscaling in Firefly Boards, the public beta of Firefly video editor, and a new unlimited generations promotion for eligible Firefly plans through 
January 15</p>\n<p><strong>Ideate and create faster with new Creative Cloud and Firefly ...</strong><br>\n<a href=\"https://blog.adobe.com/en/publish/2025/06/17/create-ideate-faster-with-new-creative-cloud-firefly-innovations\" target=\"_blank\">https://blog.adobe.com/en/publish/2025/06/17/create-ideate-faster-with-new-creative-cloud-firefly-innovations</a><br>\nJun 17, 2025 Discover the brand new features and updates coming to Lightroom, Photoshop, Illustrator, and Adobe Firefly—all designed to help you move from idea to final delivery faster than ever.</p>\n<p><strong>New at MAX London: A more powerful Creative Cloud, new Firefly AI ...</strong><br>\n<a href=\"https://blog.adobe.com/en/publish/2025/04/24/new-max-london-more-powerful-creative-cloud-new-firefly-ai-tools-more-support-creative-careers\" target=\"_blank\">https://blog.adobe.com/en/publish/2025/04/24/new-max-london-more-powerful-creative-cloud-new-firefly-ai-tools-more-support-creative-careers</a><br>\nApr 24, 2025 Today at Adobe MAX in London, we unveiled the latest Adobe Creative Cloud, packed with new AI tools in Firefly, time-saving features, and performance boosts.</p>\n<p><strong>Firefly adds new video capabilities, industry leading AI ... 
- Adobe Blog</strong><br>\n<a href=\"https://blog.adobe.com/en/publish/2025/07/17/firefly-adds-new-video-capabilities-industry-leading-ai-models-generate-sound-effects-feature\" target=\"_blank\">https://blog.adobe.com/en/publish/2025/07/17/firefly-adds-new-video-capabilities-industry-leading-ai-models-generate-sound-effects-feature</a><br>\nJul 17, 2025 Today, we’re introducing powerful enhancements to our Firefly Video Model, including improved motion fidelity and advanced video controls that will accelerate your workflows and provide the precision and style you need to elevate your storytelling.</p>\n<p><strong>Adobe Firefly Delivers Groundbreaking AI Audio, Video and Imaging...</strong><br>\n<a href=\"https://news.adobe.com/news/2025/10/adobe-max-2025-firefly\" target=\"_blank\">https://news.adobe.com/news/2025/10/adobe-max-2025-firefly</a><br>\nOct 28, 2025 Adobe Firefly, the all-in-one AI creative studio for multimedia, now brings creators fully licensed audio track creation, crystal clear voiceover generation and an end-to-end timeline-based video editor. Adobe Firefly brings creators the top AI models in one place, at one price – including the new Firefly Image Model 5 for photorealism and prompt-based editing; new partner models from ElevenLabs, Google and Topaz Labs; and new Firefly Custom Models that generate in creators’ own personal style.</p>\n<p><strong>Adobe Delivers New AI Innovations, Assistants and Models Across...</strong><br>\n<a href=\"https://news.adobe.com/news/2025/10/adobe-max-2025-creative-cloud\" target=\"_blank\">https://news.adobe.com/news/2025/10/adobe-max-2025-creative-cloud</a><br>\nOct 28, 2025 Adobe is bringing Firefly Custom Models (private beta) to creative professionals, directly in the Firefly app, with early access to the beta through a waitlist. 
First introduced to Firefly enterprise customers, Firefly Custom Models lets creative professionals easily personalize their own AI models to generate a series of assets with visual consistency in their unique style. Personalizing Firefly Custom Models is as simple as dragging and dropping images that creative professionals have the right to use.</p>\n<p><strong>FOR IMMEDIATE RELEASE Adobe Delivers Creative Pros Unparalleled Speed ...</strong><br>\n<a href=\"https://news.adobe.com/news/downloads/pdfs/2025/04/max-london-creative-cloud-release-2025-1.pdf\" target=\"_blank\">https://news.adobe.com/news/downloads/pdfs/2025/04/max-london-creative-cloud-release-2025-1.pdf</a><br>\nApr 01, 2025 Creative pros can explore and ideate with the new Firefly app – including the new Firefly Boards for AI-first moodboarding – with Adobe’s commercially safe Firefly models and choice in partner models to explore concepts and create images, vectors, audio and video</p>\n</details>\n<details><summary>Searched with Kagi<p></p>\n<ul>\n<li>Qwen 2.5 Max January 2025</li>\n<li>Baidu ERNIE 2025 release</li>\n<li>ByteDance Doubao 2025</li>\n<li>Cohere Command R 2025</li>\n<li>Stability AI 2025 releases</li></ul></summary><br>\nSources (50):<br>\n<strong>Qwen2.5-Max</strong><br>\n<a href=\"https://qwen.ai/blog?id=qwen2.5-max\" target=\"_blank\">https://qwen.ai/blog?id=qwen2.5-max</a><br>\nQwen Chat offers comprehensive functionality spanning chatbot, image and video understanding, image generation, document processing, ...\n\n<p><strong>Qwen</strong><br>\n<a href=\"https://qwen.ai/\" target=\"_blank\">https://qwen.ai/</a><br>\n Qwen VLo empowers your creative projects by transforming ideas into stunning visuals — now with deeper understanding and precise control.Modify images, transfer ...</p>\n<p><strong>Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model</strong><br>\n<a href=\"https://qwenlm.github.io/blog/qwen2.5-max/\" 
target=\"_blank\">https://qwenlm.github.io/blog/qwen2.5-max/</a><br>\n Therefore, we evaluate Qwen2.5-Max against DeepSeek V3, a leading open-weight MoE model, Llama-3.1-405B, the largest open-weight dense model, and Qwen2.5-72B, which is also among the top open-weight dense models. The results of this comparison are presented below.</p>\n<p><strong>Qwen2.5 Technical Report</strong><br>\n<a href=\"https://arxiv.org/abs/2412.15115\" target=\"_blank\">https://arxiv.org/abs/2412.15115</a><br>\n In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen 2.5 has been significantly improved during both the pre-training and post-training stages.</p>\n<p><strong>Qwen2.5: A Party of Foundation Models!</strong><br>\n<a href=\"https://qwenlm.github.io/blog/qwen2.5/\" target=\"_blank\">https://qwenlm.github.io/blog/qwen2.5/</a><br>\nSep 19, 2024 Our latest release features the LLMs Qwen2.5, along with specialized models for coding, Qwen2.5-Coder, and mathematics, Qwen2.5-Math.</p>\n<p><strong>Qwen2.5 Technical Report - arXiv.org</strong><br>\n<a href=\"https://arxiv.org/pdf/2412.15115v2\" target=\"_blank\">https://arxiv.org/pdf/2412.15115v2</a><br>\n In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen 2.5 has been significantly improved during both the pre-training and post-training stages.</p>\n<p><strong>Qwen2.5-VL Technical Report</strong><br>\n<a href=\"https://arxiv.org/abs/2502.13923\" target=\"_blank\">https://arxiv.org/abs/2502.13923</a><br>\n We introduce Qwen2.5-VL, the latest flagship model of Qwen vision-language series, which demonstrates significant advancements in both foundational capabilities and innovative functionalities. 
Qwen2.5-VL achieves a major leap forward in understanding and interacting with the world through enhanced visual recognition, precise object localization, robust document parsing, and long-video ...</p>\n<p><strong>Qwen2.5-Max - Qwen2</strong><br>\n<a href=\"https://qwen2.org/qwen2-5-max/\" target=\"_blank\">https://qwen2.org/qwen2-5-max/</a><br>\n Qwen2.5-Max is now available in Qwen Chat! You can chat directly with the model, explore artifacts, perform searches, and more. Try it out and experience its capabilities firsthand! The API of Qwen2.5-Max (whose model name is qwen-max-2025-01-25) is available.</p>\n<p><strong>Qwen/Qwen2.5-VL-7B-Instruct</strong><br>\n<a href=\"https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct\" target=\"_blank\">https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct</a><br>\n Capable of visual localization in different formats: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can ...</p>\n<p><strong>Qwen/Qwen2.5-VL-7B-Instruct-AWQ · Hugging Face</strong><br>\n<a href=\"https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct-AWQ\" target=\"_blank\">https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct-AWQ</a><br>\nFeb 20, 2025 min_pixels = 256 * 28 * 28 max_pixels = 1280 * 28 * 28 processor = AutoProcessor.from_pretrained( \"Qwen/Qwen2.5-VL-7B-Instruct-AWQ\", min_pixels=min_pixels, max_pixels=max_pixels ). Besides, We provide two methods for fine-grained control over the image size input to the model: Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels. Specify exact dimensions: Directly set resized_height and resized_width. 
These values will be rounded to the nearest multiple of 28.</p>\n<p><strong>Baidu Unveils ERNIE 5.0 and a Series of AI Applications at ...</strong><br>\n<a href=\"https://www.prnewswire.com/news-releases/baidu-unveils-ernie-5-0-and-a-series-of-ai-applications-at-baidu-world-2025--ramps-up-global-push-302614531.html\" target=\"_blank\">https://www.prnewswire.com/news-releases/baidu-unveils-ernie-5-0-and-a-series-of-ai-applications-at-baidu-world-2025--ramps-up-global-push-302614531.html</a><br>\nNov 13, 2025 BEIJING, Nov. 13, 2025 /PRNewswire/ -- Baidu, Inc. (NASDAQ: BIDU and HKEX: 9888), a leading AI company with a strong internet foundation, today unveiled the natively omni-modal foundation model,...</p>\n<p><strong>Baidu Unveils ERNIE 5.0 and a Series of AI Applications at Baidu...</strong><br>\n<a href=\"https://www.barchart.com/story/news/36098040/baidu-unveils-ernie-5-0-and-a-series-of-ai-applications-at-baidu-world-2025-ramps-up-global-push\" target=\"_blank\">https://www.barchart.com/story/news/36098040/baidu-unveils-ernie-5-0-and-a-series-of-ai-applications-at-baidu-world-2025-ramps-up-global-push</a><br>\nNov 14, 2025 As a natively omni-modal model, ERNIE 5.0 jointly models text, images, audio, and video. Baidu announced upgrades to its digital human technology, no-code application builder Miaoda, and general AI agent GenFlow, while unveiling the self-evolving agent Famou along with the one-stop AI workspace Oreate, with select offerings such as MeDo and the digital human technology rolling out to global markets.</p>\n<p><strong>ERNIE-4.5-VL-28B-A3B-Thinking: A Breakthrough in ...</strong><br>\n<a href=\"https://yiyan.baidu.com/blog/posts/ernie-4.5-vl-28b-a3b-thinking/\" target=\"_blank\">https://yiyan.baidu.com/blog/posts/ernie-4.5-vl-28b-a3b-thinking/</a><br>\n ERNIE-4.5-VL-28B-A3B-Thinking: A Breakthrough in Multimodal AI. 
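The min_pixels / max_pixels controls quoted in the Qwen2.5-VL-7B-Instruct-AWQ snippet above can be sketched as follows. This is an illustrative reimplementation, not code from the model card: the function name smart_resize and the exact floor/ceil rounding policy are assumptions modelled on the behaviour the card describes (keep the pixel count within bounds, preserve aspect ratio, snap both sides to multiples of 28).

```python
import math

PATCH = 28                          # vision patch size; dimensions snap to multiples of 28
MIN_PIXELS = 256 * PATCH * PATCH    # default lower bound (256 patches)
MAX_PIXELS = 1280 * PATCH * PATCH   # default upper bound (1280 patches)

def smart_resize(height: int, width: int) -> tuple[int, int]:
    """Scale (height, width) so the pixel count lands in
    [MIN_PIXELS, MAX_PIXELS], preserving aspect ratio and rounding
    both sides to multiples of PATCH."""
    pixels = height * width
    if pixels > MAX_PIXELS:    # shrink; floor so we stay under budget
        s = math.sqrt(MAX_PIXELS / pixels)
        h = max(PATCH, math.floor(height * s / PATCH) * PATCH)
        w = max(PATCH, math.floor(width * s / PATCH) * PATCH)
    elif pixels < MIN_PIXELS:  # enlarge; ceil so we reach the floor
        s = math.sqrt(MIN_PIXELS / pixels)
        h = math.ceil(height * s / PATCH) * PATCH
        w = math.ceil(width * s / PATCH) * PATCH
    else:                      # already in range; just snap to the grid
        h = max(PATCH, round(height / PATCH) * PATCH)
        w = max(PATCH, round(width / PATCH) * PATCH)
    return h, w

# A 1080p frame is scaled down into the ~1M-pixel budget:
print(smart_resize(1080, 1920))  # → (728, 1316)
```

Passing min_pixels and max_pixels to AutoProcessor.from_pretrained, as in the quoted snippet, applies the same bounds inside the processor rather than in user code.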
November 11, 2025 · 7 min | Translations: 中文 · BAIDU AI Studio Demo | Hugging Face Demo ...</p>\n<p><strong>Baidu Unveils ERNIE 5.0 and a Series of AI Applications at ... - Nasdaq</strong><br>\n<a href=\"https://www.nasdaq.com/press-release/baidu-unveils-ernie-50-and-series-ai-applications-baidu-world-2025-ramps-global-push\" target=\"_blank\">https://www.nasdaq.com/press-release/baidu-unveils-ernie-50-and-series-ai-applications-baidu-world-2025-ramps-global-push</a><br>\n --Baidu, Inc., a leading AI company with a strong internet foundation, today unveiled the natively omni-modal foundation model, ERNIE 5.0, at its annual flagship event, Baidu World 2025.</p>\n<p><strong>Baidu Open Sources ERNIE 4.5: LLM Series Scaling... - MarkTechPost</strong><br>\n<a href=\"https://www.marktechpost.com/2025/07/01/baidu-open-sources-ernie-4-5-llm-series-scaling-from-0-3b-to-424b-parameters/\" target=\"_blank\">https://www.marktechpost.com/2025/07/01/baidu-open-sources-ernie-4-5-llm-series-scaling-from-0-3b-to-424b-parameters/</a><br>\nJul 01, 2025 - July 1, 2025. Baidu has officially open-sourced its latest ERNIE 4.5 series, a powerful family of foundation models designed for enhanced language understanding, reasoning, and generation. The release includes ten model variants ranging from compact 0.3B dense models to massive Mixture-of-Experts (MoE) architectures, with the largest variant totaling 424B parameters.</p>\n<p><strong>Announcing the Open Source Release of the ERNIE 4.5 ...</strong><br>\n<a href=\"https://ernie.baidu.com/blog/posts/ernie4.5/\" target=\"_blank\">https://ernie.baidu.com/blog/posts/ernie4.5/</a><br>\n Announcing the Open Source Release of the ERNIE 4.5 Model Family ... 
June 30, 2025 · 6 min | Translations: 中文 · ERNIE Bot | GitHub | Hugging ...</p>\n<p><strong>Baidu - ERNIE</strong><br>\n<a href=\"https://ernie.baidu.com/\" target=\"_blank\">https://ernie.baidu.com/</a><br>\n ERNIE is a conversational AI developed by Baidu, global technology leader from China. It's designed to understand complex questions, provide clear answers, and assist with learning, problem-solving, and communication.</p>\n<p><strong>Ernie Bot</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Ernie_Bot\" target=\"_blank\">https://en.wikipedia.org/wiki/Ernie_Bot</a><br>\nJan 27, 2019 Ernie Bot, full name Enhanced Representation through Knowledge Integration, is an artificial intelligence chatbot developed by the Chinese technology company Baidu. Ernie Bot rivals GPT models in Chinese NLP tasks. It is built on the company's ERNIE series of large language models, which have been in development since 2019. The service was first launched for invited testing on March 16, 2023, and was released to the general public on August 31, 2023, after...</p>\n<p><strong>Baidu - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Baidu\" target=\"_blank\">https://en.wikipedia.org/wiki/Baidu</a><br>\n On 16 March 2025, Baidu released two new artificial intelligence models: ERNIE 4.5, a foundation model, and ERNIE X1, a reasoning model. Baidu claimed that ERNIE X1 performs comparably to DeepSeek's R1 model at half the price.</p>\n<p><strong>ERNIE Blog</strong><br>\n<a href=\"https://ernie.baidu.com/blog/\" target=\"_blank\">https://ernie.baidu.com/blog/</a><br>\nJul 03, 2025 Just now, LMArena released its latest rankings. Baidu’s ERNIE-5.0-Preview-1203 scored an impressive 1,451 points. We release ERNIE-4.5-VL-28B-A3B-Thinking, a multimodal reasoning model that achieves SOTA performance while activating only 3B parameters. November 11, 2025 · 7 min. 
ERNIE-5.0-Preview-1022, ready for testing in LMArena!</p>\n<p><strong>ByteDance Seed</strong><br>\n<a href=\"https://seed.bytedance.com/en/\" target=\"_blank\">https://seed.bytedance.com/en/</a><br>\nMay 20, 2024 The ByteDance Seed team was established in 2023, dedicated to discovering new approaches to general intelligence, pushing the boundaries of AI. The team's research areas include LLM, speech, vision, world models, infrastructure, AI Infra, and next-generation AI interactions. The team operates labs in China, Singapore, the United States, and other locations.</p>\n<p><strong>Seed1.8</strong><br>\n<a href=\"https://seed.bytedance.com/en/seed1_8\" target=\"_blank\">https://seed.bytedance.com/en/seed1_8</a><br>\nDec 18, 2025 Our team is releasing Seed1.8, a model designed for generalized real-world agency. It supports text and image inputs and with its powerful ...</p>\n<p><strong>Seed News - ByteDance Seed Team</strong><br>\n<a href=\"https://seed.bytedance.com/en/blog/the-doubao-visual-language-model-officially-released-with-general-model-capability-fully-comparable-to-gpt-4o\" target=\"_blank\">https://seed.bytedance.com/en/blog/the-doubao-visual-language-model-officially-released-with-general-model-capability-fully-comparable-to-gpt-4o</a><br>\nDec 26, 2024 Doubao Realtime Voice Model Is Available Upon Release! High EQ and IQ. Date. 2025-01-20. The Doubao Realtime Voice Model was officially launched today, and fully available on the Doubao APP. You can try to use it by upgrading the Doubao APP to version 7.2.0. 
The Doubao Realtime Voice Model is an integrated voice understanding and generation model that realizes end-to-end speech dialogues.</p>\n<p><strong>ByteDance’s Doubao reaches 100M DAU with minimal marketing spend</strong><br>\n<a href=\"https://technode.com/2025/12/25/bytedances-doubao-reaches-100m-dau-with-minimal-marketing-spend/\" target=\"_blank\">https://technode.com/2025/12/25/bytedances-doubao-reaches-100m-dau-with-minimal-marketing-spend/</a><br>\nDec 25, 2025 Doubao’s daily active users (DAU) have surpassed 100 million, according to people familiar with ByteDance’s internal data. Insiders say that Doubao’s user acquisition and marketing spend is the lowest among all ByteDance products that have ever reached 100 million DAU. Powered by AI features such as Seedream image generation and Seedance video generation, Doubao has leveraged organic traffic distribution across platforms including Douyin (TikTok) and Toutiao.</p>\n<p><strong>ByteDance denies reports of regulatory meeting over Doubao AI...</strong><br>\n<a href=\"https://technode.com/2025/12/15/bytedance-denies-reports-of-regulatory-meeting-over-doubao-ai-phone/\" target=\"_blank\">https://technode.com/2025/12/15/bytedance-denies-reports-of-regulatory-meeting-over-doubao-ai-phone/</a><br>\nDec 15, 2025 ByteDance’s large language model Doubao announced on December 1 the launch of a mobile assistant and a partnership with ZTE to introduce the “Doubao AI Phone,” the nubia M153. 
As a system-level mobile assistant with elevated permissions, Doubao is designed to operate smartphones in a human-like manner, aiming to reshape human–computer interaction.</p>\n<p><strong>ByteDance reportedly developing second-gen AI phone for 2026 after...</strong><br>\n<a href=\"https://technode.com/2025/12/04/bytedance-reportedly-developing-second-gen-ai-phone-for-2026-after-first-batch-sells-out/\" target=\"_blank\">https://technode.com/2025/12/04/bytedance-reportedly-developing-second-gen-ai-phone-for-2026-after-first-batch-sells-out/</a><br>\nDec 04, 2025 Following the rapid sell-out of its initial engineering batch, ByteDance has reportedly halted production of its first phone equipped with the Doubao AI assistant and is moving forward with a successor. Supply chain sources said the release was a market test limited to about 30,000 units. ByteDance and ZTE are reportedly developing a second version expected to ship by late 2026.</p>\n<p><strong>豆包:Advanced multimodal AI platform by ByteDance offering... - MOGE</strong><br>\n<a href=\"https://moge.ai/product/doubao\" target=\"_blank\">https://moge.ai/product/doubao</a><br>\nJun 02, 2025 Doubao is ByteDance's leading AI chatbot and large model platform, recognized as China's top AI assistant with over 50 million active users. It features cutting-edge large language models like Doubao-1.5-pro, which outperform industry leaders such as GPT-4o and Claude 3.5 in knowledge, reasoning, and code generation benchmarks. 
Doubao supports multimodal inputs and outputs, including text, images, audio, and video, enabling versatile content creation and intelligent interaction.</p>\n<p><strong>ByteDance's Overseas AI Assistant Dola Surpasses 10 ...</strong><br>\n<a href=\"https://pandaily.com/byte-dance-s-overseas-ai-assistant-dola-surpasses-10-million-daily-active-users\" target=\"_blank\">https://pandaily.com/byte-dance-s-overseas-ai-assistant-dola-surpasses-10-million-daily-active-users</a><br>\nDec 31, 2025 On December 31, 2025, ByteDance announced that its overseas AI assistant app Dola has exceeded 10 million daily active users (DAU). The app ...</p>\n<p><strong>ByteDance's Doubao: A Strategic Masterstroke in AI-Driven Consumer ...</strong><br>\n<a href=\"https://www.ainvest.com/news/bytedance-doubao-strategic-masterstroke-ai-driven-consumer-engagement-2512/\" target=\"_blank\">https://www.ainvest.com/news/bytedance-doubao-strategic-masterstroke-ai-driven-consumer-engagement-2512/</a><br>\n ByteDance's Doubao has emerged as a defining force in AI-driven consumer engagement, leveraging hyper-personalization, ecosystem integration, and enterprise scalability to dominate China's AI market. With 172 million monthly active users (MAUs) as of September 2025 and</p>\n<p><strong>ByteDance's $14 Billion Bet: A Structural Shift in the Global ...</strong><br>\n<a href=\"https://www.ainvest.com/news/bytedance-14-billion-bet-structural-shift-global-ai-compute-race-2512/\" target=\"_blank\">https://www.ainvest.com/news/bytedance-14-billion-bet-structural-shift-global-ai-compute-race-2512/</a><br>\nDec 31, 2025 - ByteDance plans to boost AI infrastructure spending to meet explosive demand, with 50% allocated to advanced semiconductors. 
- The company's ...</p>\n<p><strong>Cohere Command Models: AI-Powered Solutions for ...</strong><br>\n<a href=\"https://cohere.com/command\" target=\"_blank\">https://cohere.com/command</a><br>\nJun 07, 2023 The Command family empowers your team with AI purpose-built for real-world agentic applications anchored in your data, so you can focus on the meaningful work that truly matters.</p>\n<p><strong>Introducing Command R+: A Scalable LLM Built for... | Cohere Blog</strong><br>\n<a href=\"https://cohere.com/blog/command-r-plus-microsoft-azure\" target=\"_blank\">https://cohere.com/blog/command-r-plus-microsoft-azure</a><br>\nApr 04, 2024 Command R+ will also be available immediately on Cohere’s hosted API. “Enterprises are clearly looking for highly accurate and efficient AI models like Cohere’s latest Command R+ to move into production,” said Miranda Nash, group vice president, Applications Development & Strategy, Oracle. Cohere Team — Dec 11, 2025. 
Introducing Rerank 4: Cohere’s most powerful reranker yet.</p>\n<p><strong>Enterprise AI: Private, Secure, Customizable | Cohere</strong><br>\n<a href=\"https://cohere.com/\" target=\"_blank\">https://cohere.com/</a><br>\n Cohere builds powerful models and AI solutions enabling enterprises to automate processes, empower employees, and turn fragmented data into actionable insights.</p>\n<p><strong>Announcing Major Command Deprecations | Cohere</strong><br>\n<a href=\"https://docs.cohere.com/changelog/2025-09-15-major-command-deprecations\" target=\"_blank\">https://docs.cohere.com/changelog/2025-09-15-major-command-deprecations</a><br>\nSep 16, 2025 For command model replacements, we recommend you use command-r-08-2024, command-r-plus-08-2024, or command-a-03-2025 (which is the strongest-performing model across domains) instead.</p>\n<p><strong>Cohere's Command R Model | Cohere</strong><br>\n<a href=\"https://docs.cohere.com/docs/command-r\" target=\"_blank\">https://docs.cohere.com/docs/command-r</a><br>\nMar 11, 2024 Command R boasts high precision on retrieval augmented generation (RAG) and tool use tasks, low latency and high throughput, a long 128,000-token context length, and strong capabilities across 10 key languages. For information on toxicity, safety, and using this model responsibly check out our Command model card. Cohere’s flagship text-generation models, Command R and Command R+, received a substantial update in August 2024.</p>\n<p><strong>Cohere’s Command R+ Model</strong><br>\n<a href=\"https://docs.cohere.com/docs/command-r-plus\" target=\"_blank\">https://docs.cohere.com/docs/command-r-plus</a><br>\n For tool use, Command R and Command R+ have demonstrated improved decision-making around whether or not to use a tool. 
The updated models are better able to follow instructions included in the request’s system message.</p>\n<p><strong>Deprecations</strong><br>\n<a href=\"https://docs.cohere.com/docs/deprecations\" target=\"_blank\">https://docs.cohere.com/docs/deprecations</a><br>\n Models fine-tuned with Command-R-03-2024 will continue to be supported until March 08, 2025. After this date, all requests to these models will return an error.</p>\n<p><strong>An Overview of Cohere's Models</strong><br>\n<a href=\"https://docs.cohere.com/docs/models\" target=\"_blank\">https://docs.cohere.com/docs/models</a><br>\n Command R is an instruction-following conversational model that performs language tasks at a higher quality, more reliably, and with a longer context than previous models. It can be used for complex workflows like code generation, retrieval augmented generation (RAG), tool use, and agents.</p>\n<p><strong>Cohere Command R+ (Retired) - docs.oracle.com</strong><br>\n<a href=\"https://docs.oracle.com/en-us/iaas/Content/generative-ai/cohere-command-r-plus.htm\" target=\"_blank\">https://docs.oracle.com/en-us/iaas/Content/generative-ai/cohere-command-r-plus.htm</a><br>\nMay 27, 2025 In summary, Command R is an excellent choice for those looking for a more affordable and flexible option for general language tasks. On the other hand, Command R+ is designed for power users who require advanced language understanding, higher capacity, and more nuanced responses. The choice between the two would depend on the specific requirements and budget of your application. Dedicated Mode Retirement Date. cohere.command-r-plus. 2024-06-18. 2025-01-16. 2025-08-07. 
For a list of all model timelines and retirement details, see Retiring the Models.</p>\n<p><strong>Release Notes - Cohere</strong><br>\n<a href=\"https://docs.cohere.com/changelog\" target=\"_blank\">https://docs.cohere.com/changelog</a><br>\n We’re excited to announce the release of Command A Translate, Cohere’s first machine translation model. It achieves state-of-the-art performance at producing accurate, fluent translations across 23 languages. The model is available immediately through Cohere’s Chat API endpoint.</p>\n<p><strong>Release Notes</strong><br>\n<a href=\"https://platform.stability.ai/docs/release-notes\" target=\"_blank\">https://platform.stability.ai/docs/release-notes</a><br>\nNov 27, 2022 The Stability AI API has a range of additional image generation offerings to explore: Stable Image Core: Optimized for fast and affordable image generation, Stable Image Core is the next-generation model following SDXL. Stable Diffusion 3.5 model family: Stability AI's latest base models. While Stable Diffusion Video will no longer be available via API, you can still deploy the model in your environment with a Self-Hosted License. Our team is happy to help if you need support. API Deprecation Notice. April 17, 2025.</p>\n<p><strong>Product — News</strong><br>\n<a href=\"https://stability.ai/news/category/Product\" target=\"_blank\">https://stability.ai/news/category/Product</a><br>\n We are excited to introduce Stable Fast 3D, Stability AI’s latest breakthrough in 3D asset generation technology. 
This innovative model transforms a single input image into a detailed 3D asset, setting a new standard for speed and quality in the field of 3D reconstruction.</p>\n<p><strong>News — Stability AI</strong><br>\n<a href=\"https://stability.ai/news\" target=\"_blank\">https://stability.ai/news</a><br>\nOct 17, 2022 Stability AI and Electronic Arts (EA) have formed a strategic partnership to co-develop transformative generative AI models, tools, and workflows that empower EA’s artists, designers, and developers to reimagine how games are made. We’re excited to release Stable Audio 2.5, our latest audio model and the first developed for enterprise-grade use cases. Stable Audio 2.5 introduces advancements in quality and control that address the demand for dynamic compositions that can be adapted for custom brand needs.</p>\n<p><strong>Stability AI releases its Image Upscaling API — Stability AI</strong><br>\n<a href=\"https://stability.ai/news/stability-ai-releases-image-upscaling-api\" target=\"_blank\">https://stability.ai/news/stability-ai-releases-image-upscaling-api</a><br>\nApr 26, 2023 Today Stability AI announces the release of its Image Upscaling API, an AI-powered tool that increases the size of any image without compromising its sharpness.</p>\n<p><strong>Stability AI</strong><br>\n<a href=\"https://stability.ai/\" target=\"_blank\">https://stability.ai/</a><br>\n Multimodal media generation and editing tools designed for the best in the business. 
No creative challenge too big, no timeline too tight.</p>\n<p><strong>Introducing Stable Diffusion 3.5</strong><br>\n<a href=\"https://stability.ai/news/introducing-stable-diffusion-3-5\" target=\"_blank\">https://stability.ai/news/introducing-stable-diffusion-3-5</a><br>\nOct 22, 2024 Today we are releasing Stable Diffusion 3.5, our most powerful models yet. This open release includes multiple variants that are customizable, ...</p>\n<p><strong>Research Blog — Stability AI</strong><br>\n<a href=\"https://stability.ai/research\" target=\"_blank\">https://stability.ai/research</a><br>\n Our community of open source research hubs has over 200,000 members building the future of AI. We are working globally with our partners, industry leaders, and experts to develop cutting-edge open AI models for Image, Language, Audio, Video, 3D, Biology and more.</p>\n<p><strong>Stability AI and Arm Collaborate to Release Stable Audio Open Small ...</strong><br>\n<a href=\"https://stability.ai/news/stability-ai-and-arm-release-stable-audio-open-small-enabling-real-world-deployment-for-on-device-audio-control\" target=\"_blank\">https://stability.ai/news/stability-ai-and-arm-release-stable-audio-open-small-enabling-real-world-deployment-for-on-device-audio-control</a><br>\n This release follows our previously announced breakthrough that Stable Audio Open is now optimized to run on Arm CPUs, powered by Arm KleidiAI to enable AI-generated audio on a mobile phone.</p>\n<p><strong>Stability AI</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Stability_AI\" target=\"_blank\">https://en.wikipedia.org/wiki/Stability_AI</a><br>\n In November 2025, the High Court of England and Wales ruled that Stability AI was not guilty of copyright infringement despite using Getty Images as a training ...</p>\n<p><strong>AI Model Releases in 2025: The Roundup of AI Launches</strong><br>\n<a href=\"https://www.timesofai.com/industry-insights/roundup-of-ai-model-releases-in-2025/\" 
target=\"_blank\">https://www.timesofai.com/industry-insights/roundup-of-ai-model-releases-in-2025/</a><br>\n Explore a clear, expert roundup of the most important AI Model Releases in 2025, comparing features and real-world impact across leading platforms</p>\n</details>\n<details><summary>Searched with Kagi<p></p>\n<ul>\n<li>OpenAI 4o image generation March 2025</li>\n<li>Anthropic MCP Model Context Protocol 2025</li>\n<li>Google Flow video tool 2025</li>\n<li>Runway Gen-4 2025</li>\n<li>Midjourney V7 2025</li></ul></summary><br>\nSources (50):<br>\n<strong>Introducing 4o Image Generation</strong><br>\n<a href=\"https://openai.com/index/introducing-4o-image-generation/\" target=\"_blank\">https://openai.com/index/introducing-4o-image-generation/</a><br>\nMar 25, 2025 At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. Multi-turn generation. Because image generation is now native to GPT‑4o, you can refine images through natural conversation. GPT‑4o can build upon images and text in chat context, ensuring consistency throughout.\n\n<p><strong>Addendum to GPT-4o System Card: 4o image generation - OpenAI</strong><br>\n<a href=\"https://openai.com/index/gpt-4o-image-generation-system-card-addendum/\" target=\"_blank\">https://openai.com/index/gpt-4o-image-generation-system-card-addendum/</a><br>\nMar 25, 2025 OpenAI. March 25, 2025. 4o image generation benefits from our existing safety infrastructure, and from lessons we have learned deploying DALL·E and Sora. At the same time, these new capabilities also bring some new risks. 
This addendum to the GPT‑4o system card describes the marginal risks we’ve focused on, and the work we have done to address them.</p>\n<p><strong>The new ChatGPT Images is here</strong><br>\n<a href=\"https://openai.com/index/new-chatgpt-images-is-here/\" target=\"_blank\">https://openai.com/index/new-chatgpt-images-is-here/</a><br>\nDec 16, 2025 The new ChatGPT Images is powered by our flagship image generation model, delivering more precise edits, consistent details, and image generation up to 4× faster. The upgraded model is rolling out to all ChatGPT users today and is also available in the API as GPT-Image-1.5.</p>\n<p><strong>Introducing our latest image generation model in the API</strong><br>\n<a href=\"https://openai.com/index/image-generation-api/\" target=\"_blank\">https://openai.com/index/image-generation-api/</a><br>\nApr 23, 2025 The model's versatility allows it to create images across diverse styles, faithfully follow custom guidelines, leverage world knowledge, and accurately render ...</p>\n<p><strong>OpenAI unveiled image generation for... | TechRadar</strong><br>\n<a href=\"https://www.techradar.com/news/live/openai-march-25-livestream-event\" target=\"_blank\">https://www.techradar.com/news/live/openai-march-25-livestream-event</a><br>\nMar 25, 2025 OpenAI debuts native image generation in the 4o model. Sam Altman and the OpenAI team during the March 25, 2025 live stream.Sam Altman was then quick to note that the improved image generation is starting to roll out now in ChatGPT and Sora for Pro users, and it will be available for free users as well. 
We also are seeing the process of the native image generation model within the 4o model, turning that generated selfie into an \"AGI meme.\"</p>\n<p><strong>OpenAI’s new AI image generator is potent and bound to provoke</strong><br>\n<a href=\"https://arstechnica.com/ai/2025/03/openais-new-ai-image-generator-is-potent-and-bound-to-provoke/\" target=\"_blank\">https://arstechnica.com/ai/2025/03/openais-new-ai-image-generator-is-potent-and-bound-to-provoke/</a><br>\nMar 01, 2025 An AI-generated image of “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting” created by OpenAI’s 4o Image Generation model. Credit: OpenAI / Benj Edwards. Some users on social media initially reported confusion since there’s no UI indication of which image generator is active, but you’ll know it’s the new model if the generation is ultra slow and proceeds from top to bottom. The previous DALL-E model remains available through a dedicated “DALL-E GPT” interface, while API access to GPT-4o image generation is expected within weeks.</p>\n<p><strong>OpenAI’s new ChatGPT image generator makes faking photos easy</strong><br>\n<a href=\"https://arstechnica.com/ai/2025/12/openais-new-chatgpt-image-generator-makes-faking-photos-easy/\" target=\"_blank\">https://arstechnica.com/ai/2025/12/openais-new-chatgpt-image-generator-makes-faking-photos-easy/</a><br>\nDec 01, 2025 While OpenAI had a conversational image-editing model in the works since GPT-4o in 2024, Google beat OpenAI to market in March with a public prototype, then refined it to a popular model called ...</p>\n<p><strong>OpenAI launches GPT-4o image generator; set to roll out... 
- The Hindu</strong><br>\n<a href=\"https://www.thehindu.com/sci-tech/technology/openai-launches-image-generation-within-chatgpt/article69375816.ece\" target=\"_blank\">https://www.thehindu.com/sci-tech/technology/openai-launches-image-generation-within-chatgpt/article69375816.ece</a><br>\nMar 26, 2025 OpenAI has now brought image generation to ChatGPT itself, or GPT-4o, to be specific. The AI company backed by Microsoft shared one promotional image that it said was generated with 4o, where it pointed out the accurate, distortion-free text on a picture of a boarding pass. “Creating and customizing images is as simple as chatting using GPT‑4o—just describe what you need, including any specifics like aspect ratio, exact colors using hex codes, or a transparent background,” posted OpenAI on X on March 26.</p>\n<p><strong>Addendum to GPT-4o System Card: Native image generation</strong><br>\n<a href=\"https://cdn.openai.com/11998be9-5319-4302-bfbf-1167e093f1fb/Native_Image_Generation_System_Card.pdf\" target=\"_blank\">https://cdn.openai.com/11998be9-5319-4302-bfbf-1167e093f1fb/Native_Image_Generation_System_Card.pdf</a><br>\n 4o image generation is a new, significantly more capable image generation approach than our earlier DALL·E series of models. It can create photorealistic output. It can take images as inputs and transform them. 
It can follow detailed instructions, including reliably incorporating text into images.</p>\n<p><strong>OpenAI’s new image generator is now available to all users</strong><br>\n<a href=\"https://techcrunch.com/2025/03/31/openais-new-image-generator-is-now-available-to-all-users/\" target=\"_blank\">https://techcrunch.com/2025/03/31/openais-new-image-generator-is-now-available-to-all-users/</a><br>\nMar 31, 2025 OpenAI’s image-generation tool took off instantly after launch, with Altman saying the demand was so high, the company’s GPUs were “melting.” The tool also quickly gained notoriety for being used to convert pictures into the style of Japanese animation firm Studio Ghibli, raising concerns around copyright and training data used by the company, given the similarity in style. Some people also used it to generate fake receipts, such as restaurant bills.</p>\n<p><strong>Introducing the Model Context Protocol</strong><br>\n<a href=\"https://www.anthropic.com/news/model-context-protocol\" target=\"_blank\">https://www.anthropic.com/news/model-context-protocol</a><br>\nNov 25, 2024 The Model Context Protocol (MCP) is an open standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. “Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration. 
We are excited to partner on a protocol and use it to build agentic systems, which remove the burden of the mechanical so people can focus on the creative.”</p>\n<p><strong>Code execution with MCP: building more efficient AI agents \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/engineering/code-execution-with-mcp\" target=\"_blank\">https://www.anthropic.com/engineering/code-execution-with-mcp</a><br>\nNov 05, 2025 The Model Context Protocol (MCP) is an open standard for connecting AI agents to external systems. Connecting agents to tools and data traditionally requires a custom integration for each pairing, creating fragmentation and duplicated effort that makes it difficult to scale truly connected systems. MCP provides a universal protocol—developers implement MCP once in their agent and it unlocks an entire ecosystem of integrations.</p>\n<p><strong>Donating the Model Context Protocol and establishing ...</strong><br>\n<a href=\"https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation\" target=\"_blank\">https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation</a><br>\nDec 09, 2025 The Model Context Protocol's governance model will remain unchanged: the project's maintainers will continue to prioritize community input and ...</p>\n<p><strong>Building agents with the Claude Agent SDK \\ Anthropic</strong><br>\n<a href=\"https://www.anthropic.com/engineering/building-agents-with-the-claude-agent-sdk\" target=\"_blank\">https://www.anthropic.com/engineering/building-agents-with-the-claude-agent-sdk</a><br>\n The Model Context Protocol (MCP) provides standardized integrations to external services, handling authentication and API calls automatically. 
This means you can connect your agent to tools like Slack, GitHub, Google Drive, or Asana without writing custom integration code or managing OAuth flows yourself.</p>\n<p><strong>Roadmap - Model Context Protocol</strong><br>\n<a href=\"https://modelcontextprotocol.io/development/roadmap\" target=\"_blank\">https://modelcontextprotocol.io/development/roadmap</a><br>\n The Model Context Protocol is rapidly evolving. This page outlines our priorities for the next release on November 25th, 2025, with a release candidate available on November 11th, 2025. To see what’s changing in the upcoming release, check out the specification changelog.</p>\n<p><strong>Model Context Protocol - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Model_Context_Protocol\" target=\"_blank\">https://en.wikipedia.org/wiki/Model_Context_Protocol</a><br>\nMar 27, 2025 The Model Context Protocol (MCP) is an open standard and open-source framework introduced by Anthropic in November 2024 to standardize the way artificial intelligence (AI) systems like large language models (LLMs) integrate and share data with external tools, systems, and data sources.[1] MCP provides a universal interface for reading files, executing functions, and handling contextual prompts.[2] Following its announcement, the.</p>\n<p><strong>Anthropic - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Anthropic\" target=\"_blank\">https://en.wikipedia.org/wiki/Anthropic</a><br>\nDec 26, 2008 In November 2025, Nvidia, Microsoft and Anthropic announced a partnership deal. NVIDIA and Microsoft were expected to invest up to $15 billion in Anthropic, and in turn, Anthropic said it would buy $30 billion of computing capacity from Microsoft Azure running on Nvidia AI systems.[33]. In November 2025, Anthropic said that hackers sponsored by the Chinese government used Claude to perform automated cyberattacks against around 30 global organisations.</p>\n<p><strong>What is the Model Context Protocol (MCP)? 
- Model Context ...</strong><br>\n<a href=\"https://modelcontextprotocol.io/\" target=\"_blank\">https://modelcontextprotocol.io/</a><br>\n MCP (Model Context Protocol) is an open-source standard for connecting AI applications to external systems. Using MCP, AI applications like Claude or ...</p>\n<p><strong>Specification</strong><br>\n<a href=\"https://modelcontextprotocol.io/specification/2025-03-26\" target=\"_blank\">https://modelcontextprotocol.io/specification/2025-03-26</a><br>\nNov 25, 2025 Whether you’re building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need. This specification defines the authoritative protocol requirements, based on the TypeScript schema in schema.ts.</p>\n<p><strong>arXiv:2504.03767v2</strong><br>\n<a href=\"https://arxiv.org/pdf/2504.03767\" target=\"_blank\">https://arxiv.org/pdf/2504.03767</a><br>\n Leidos <a href=\"mailto:halloranjt@leidos.com\">halloranjt@leidos.com</a> Abstract To reduce development overhead and enable seamless integration between potential components comprising any given generative AI application, the Model Context Protocol (MCP) (Anthropic, 2025d) has recently been rele.</p>\n<p><strong>Flow</strong><br>\n<a href=\"https://labs.google/flow/about\" target=\"_blank\">https://labs.google/flow/about</a><br>\nFeb 22, 2025 Flow is an AI filmmaking tool built with and for creatives. Bring your own assets, or generate them in Flow. Then easily manage and reference them as you start to generate clips. An interface designed for the creative story-building process from ideation to iteration. 
State-of-the-art video quality made possible by Google DeepMind’s most advanced models.</p>\n<p><strong>Introducing Flow: Google’s AI filmmaking tool designed for Veo</strong><br>\n<a href=\"https://blog.google/technology/ai/google-flow-veo-ai-filmmaking-tool/\" target=\"_blank\">https://blog.google/technology/ai/google-flow-veo-ai-filmmaking-tool/</a><br>\nMay 20, 2025 Today we’re introducing Flow, our new AI filmmaking tool. It’s built by and for creatives, and it’s the only AI filmmaking tool custom-designed for Google’s most advanced models — Veo, Imagen and Gemini. Flow can help storytellers explore their ideas without bounds and create cinematic clips and scenes for their stories. It’s early days, and we’re excited to shape the future of Flow with creatives and filmmakers.</p>\n<p><strong>Bringing new Veo 3.1 updates into Flow to edit AI video</strong><br>\n<a href=\"https://blog.google/technology/ai/veo-updates-flow/\" target=\"_blank\">https://blog.google/technology/ai/veo-updates-flow/</a><br>\nOct 15, 2025 With \"Ingredients to Video,\" you can use multiple reference images to control the characters, objects and style. Flow uses your ingredients to create a final scene that looks just as you envisioned. Control the shot from start to finish. Provide a starting and ending image with “Frames to Video,” and Flow will generate a seamless video that bridges the two, perfect for artful and epic transitions.</p>\n<p><strong>Google 2025 recap: Research breakthroughs of the year</strong><br>\n<a href=\"https://blog.google/technology/ai/2025-research-breakthroughs/\" target=\"_blank\">https://blog.google/technology/ai/2025-research-breakthroughs/</a><br>\n In 2025, Google made significant AI research breakthroughs with models like Gemini 3 and Gemma 3. These advancements improved AI's reasoning, multimodality, and efficiency, leading to new products and features across Google's portfolio. 
Expect more AI-driven innovations in science, computing, and tools for global challenges as Google prioritizes responsible AI development and collaboration.</p>\n<p><strong>5 tips for using Flow, Google’s AI filmmaking tool</strong><br>\n<a href=\"https://blog.google/technology/ai/flow-video-tips/\" target=\"_blank\">https://blog.google/technology/ai/flow-video-tips/</a><br>\nJun 26, 2025 Alternative styles: Flow is not limited to realistic visual styles. You can explore a wide array of animation styles to match your story's tone. Experiment with prompts that specify aesthetics like \"stop motion,\" \"knitted animation\" or \"clay animation.\" Audio and dialogue: While still an experimental feature, you can generate audio with your video by selecting Veo 3 in the model picker. You can then prompt the model to create ambient noise, specific sound effects, or even generate dialogue by including it in your prompt, optionally specifying details like tone, emotion, or accents.</p>\n<p><strong>Introducing Flow, Google's new AI video tool and Sora competitor</strong><br>\n<a href=\"https://mashable.com/article/google-introduces-ai-video-tool-flow-veo-3\" target=\"_blank\">https://mashable.com/article/google-introduces-ai-video-tool-flow-veo-3</a><br>\n At the Google I/O 2025 keynote event on May 20, Google unveiled a new suite of AI video tools, powered by state-of-the-art models. The offspring of media models Veo 3 and Imagen 4, Flow...</p>\n<p><strong>Google Flow: The AI Tool That Makes Pro Video Creation ...</strong><br>\n<a href=\"https://www.imagine.art/blogs/google-flow-overview\" target=\"_blank\">https://www.imagine.art/blogs/google-flow-overview</a><br>\nJun 03, 2025 At Google I/O 2025, Google introduced Flow, an innovative AI filmmaking tool designed to empower creators to produce cinematic-quality videos using simple text prompts. Google's advanced AI models, including Veo 3.1, Nano Banana Pro, Imagen 4, and Gemini, are seamlessly integrated into Flow. 
This enables users to craft detailed scenes with characters, dialogue, and dynamic camera movements without the need for traditional filmmaking expertise.</p>\n<p><strong>Google debuts an AI-powered video tool called Flow - TechCrunch</strong><br>\n<a href=\"https://techcrunch.com/2025/05/20/google-debuts-an-ai-powered-video-tool-called-flow/\" target=\"_blank\">https://techcrunch.com/2025/05/20/google-debuts-an-ai-powered-video-tool-called-flow/</a><br>\nMay 20, 2025 At its Google I/O 2025 developer conference on Tuesday, Google announced Flow, a new AI-powered video tool geared toward filmmaking. The company said it’s using a trio of its AI models — Veo for...</p>\n<p><strong>Google launches new AI video generation tool 'Flow', featuring Veo 3 ...</strong><br>\n<a href=\"https://thetechportal.com/2025/05/21/google-launches-new-ai-video-generation-tool-flow-featuring-veo-3-and-imagen-4-capabilities/\" target=\"_blank\">https://thetechportal.com/2025/05/21/google-launches-new-ai-video-generation-tool-flow-featuring-veo-3-and-imagen-4-capabilities/</a><br>\nMay 21, 2025 Google has introduced an innovative AI-powered video creation tool called 'Flow', unveiled at the Google I/O 2025 conference.</p>\n<p><strong>Gemini AI video generator powered by Veo 3.1</strong><br>\n<a href=\"https://gemini.google/overview/video-generation/\" target=\"_blank\">https://gemini.google/overview/video-generation/</a><br>\n Create high-quality, 8-second videos with Veo 3.1, our latest AI video generator. Simply describe what you have in mind or upload a photo and watch your ...</p>\n<p><strong>Introducing Runway Gen-4</strong><br>\n<a href=\"https://runwayml.com/research/introducing-runway-gen-4\" target=\"_blank\">https://runwayml.com/research/introducing-runway-gen-4</a><br>\nMar 31, 2025 Introducing Runway Gen-4. Our next-generation series of AI models for media generation and world consistency. A new generation of consistent and controllable media is here. 
With Gen-4, you are now able to precisely generate consistent characters, locations and objects across scenes. Simply set your look and feel and the model will maintain coherent world environments while preserving the distinctive style, mood and cinematographic elements of each frame.</p>\n<p><strong>Runway Research | Introducing Runway Gen-4.5</strong><br>\n<a href=\"https://runwayml.com/research/introducing-runway-gen-4.5\" target=\"_blank\">https://runwayml.com/research/introducing-runway-gen-4.5</a><br>\nDec 01, 2025 Introducing Runway Gen-4.5. Two years ago, we released Gen-1, the first publicly available video generation model. It enabled an entirely new form of creative expression and a new product category. Since then, we've led the industry in making video models more powerful and controllable, from significant base model improvements to new controls and general in-context capabilities. Runway Gen-4.5 pushes the frontier of video generation even further.</p>\n<p><strong>Runway | Building AI to Simulate the World</strong><br>\n<a href=\"https://runwayml.com/\" target=\"_blank\">https://runwayml.com/</a><br>\n Gen-4.5: A new frontier for video generation. Runway Gen-4.5 is the world's top-rated video model, offering unprecedented visual fidelity and creative control.</p>\n<p><strong>Runway | AI Image and Video Generator</strong><br>\n<a href=\"https://runwayml.com/product\" target=\"_blank\">https://runwayml.com/product</a><br>\nJun 17, 2020 Introducing Gen-4.5: A new frontier for video generation. How the world's leading organizations and artists are using Runway. Runway is helping change the way work gets made across all industries.</p>\n<p><strong>Worlds of Gen-4</strong><br>\n<a href=\"https://runwayml.com/worlds-of-gen-4\" target=\"_blank\">https://runwayml.com/worlds-of-gen-4</a><br>\n Explore our most advanced base model for image generation. 
With unprecedented stylistic control and visual fidelity, Gen-4 allows you to define worlds.</p>\n<p><strong>Behind the Scenes with Gen-4</strong><br>\n<a href=\"https://runwayml.com/gen-4-bts\" target=\"_blank\">https://runwayml.com/gen-4-bts</a><br>\n A collection of short films and music videos made entirely with Gen-4 to test the model's narrative capabilities. A new way to make media and tell stories.</p>\n<p><strong>AI Image and Video Pricing from $12/month</strong><br>\n<a href=\"https://runwayml.com/pricing\" target=\"_blank\">https://runwayml.com/pricing</a><br>\n Compare Runway pricing plans for AI video generation. Choose from Free, Standard, Pro and Unlimited plans with flexible credits and features.</p>\n<p><strong>Product Updates & Changelog | Runway AI</strong><br>\n<a href=\"https://runwayml.com/changelog\" target=\"_blank\">https://runwayml.com/changelog</a><br>\n View Runway's latest product updates and feature releases. Track changes, improvements and new AI video generation capabilities in our changelog.</p>\n<p><strong>With new Gen-4 model, Runway claims to have finally achieved ...</strong><br>\n<a href=\"https://arstechnica.com/ai/2025/03/with-new-gen-4-model-runway-claims-to-have-finally-achieved-consistency-in-ai-videos/\" target=\"_blank\">https://arstechnica.com/ai/2025/03/with-new-gen-4-model-runway-claims-to-have-finally-achieved-consistency-in-ai-videos/</a><br>\nMar 01, 2025 AI video startup Runway announced the availability of its newest video synthesis model today. Dubbed Gen-4, the model purports to solve several key problems with AI video generation. Chief...</p>\n<p><strong>Runway Gen-4: AI Video Upgrade with Consistent Characters</strong><br>\n<a href=\"https://wowlabz.com/runway-gen-4-ai/\" target=\"_blank\">https://wowlabz.com/runway-gen-4-ai/</a><br>\n On March 31, 2025, Runway introduced Gen-4, a revolutionary tool that takes AI video creation to a whole new level. 
This isn’t just another update—it’s a major breakthrough that tackles common video-making problems and opens up exciting new possibilities for creators.</p>\n<p><strong>Midjourney</strong><br>\n<a href=\"https://www.midjourney.com/\" target=\"_blank\">https://www.midjourney.com/</a><br>\nJun 02, 2014 Over the coming months, we’re unveiling a wide range of ambitious projects under the themes of imagination, coordination, reflection, beauty, and human flourishing. We hope our work will help tell stories of a humane future that we all want to be a part of, and convince you that we aren’t at the end of time, or the beginning, but that we are all midjourney in a vast and great adventure. Eye. Image and Video Models.</p>\n<p><strong>Updates</strong><br>\n<a href=\"https://www.midjourney.com/updates\" target=\"_blank\">https://www.midjourney.com/updates</a><br>\n Latest Updates. All Announcements Changelog. Nothing to see here yet.</p>\n<p><strong>Explore</strong><br>\n<a href=\"https://www.midjourney.com/explore\" target=\"_blank\">https://www.midjourney.com/explore</a><br>\n Feed of the latest Midjourney generations.</p>\n<p><strong>Midjourney V7 Released - by Gareth B. Davies</strong><br>\n<a href=\"https://midjourney.substack.com/p/midjourney-v7-released\" target=\"_blank\">https://midjourney.substack.com/p/midjourney-v7-released</a><br>\n Direct from the Midjourney team: After months of testing V7 and the release of Omni-reference and Style Reference V7 - we're officially switching the default model to V7! This means if you haven't tried V7 yet, your model will be automatically switched to V7.</p>\n<p><strong>Midjourney releases V7, its first new AI image model in... 
| TechCrunch</strong><br>\n<a href=\"https://techcrunch.com/2025/04/03/midjourney-releases-its-first-new-ai-image-model-in-nearly-a-year/\" target=\"_blank\">https://techcrunch.com/2025/04/03/midjourney-releases-its-first-new-ai-image-model-in-nearly-a-year/</a><br>\nApr 04, 2025 Midjourney, one of the earliest AI image-generating services on the web, has released its first new AI image model in nearly a year. Dubbed V7, the model began rolling out in alpha around midnight ET on Thursday, a week after OpenAI debuted a new image generator in ChatGPT that quickly went viral for its ability to create Ghibli-style photos. To use it, you’ll first have to rate around 200 images to build a Midjourney “personalization” profile, if you haven’t already.</p>\n<p><strong>Version</strong><br>\n<a href=\"https://docs.midjourney.com/hc/en-us/articles/32199405667853-Version\" target=\"_blank\">https://docs.midjourney.com/hc/en-us/articles/32199405667853-Version</a><br>\n Version 7 was released on April 3, 2025, and became the default model on June 17, 2025. In version 7 text and image prompts are handled with stunning precision, while image quality shines with richer textures and more coherent details—especially in bodies, hands, and objects.</p>\n<p><strong>Personalization – Midjourney</strong><br>\n<a href=\"https://docs.midjourney.com/hc/en-us/articles/32433330574221-Personalization\" target=\"_blank\">https://docs.midjourney.com/hc/en-us/articles/32433330574221-Personalization</a><br>\nJun 12, 2024 What is Personalization? Personalization acts like a style assistant for your image creations. By liking or ranking images, Midjourney gets to know what kind of images you love and uses that information to generate images tailored to your personal tastes and unique style. 
Personalization is compatible with Midjourney versions 6 and 7.</p>\n<p><strong>Midjourney 2025: V7 Timeline and Video Features - AI Tools</strong><br>\n<a href=\"https://www.godofprompt.ai/blog/midjourney-2025-v7-timeline-and-video-features\" target=\"_blank\">https://www.godofprompt.ai/blog/midjourney-2025-v7-timeline-and-video-features</a><br>\nMay 13, 2025 Midjourney Version 7 is here, and it's changing the game for AI-generated content in 2025. With a rebuilt system, text-to-video tools, and enhanced 3D capabilities, V7 offers faster processing, better photorealism, and more precise prompt interpretation. Here's what you need to know: Release Timeline: Rolled out between January and February 2025.</p>\n<p><strong>Midjourney Update: February 2025 New Features, V7 Timeline & Video/3D ...</strong><br>\n<a href=\"https://generativeai.pub/midjourney-update-february-2025-new-features-v7-timeline-video-3d-updates-f3afcc995703\" target=\"_blank\">https://generativeai.pub/midjourney-update-february-2025-new-features-v7-timeline-video-3d-updates-f3afcc995703</a><br>\n Midjourney V7 was initially set to launch earlier but has been delayed by about a week due to last-minute issues. The new expected release window is mid-to-late February 2025.</p>\n<p><strong>Midjourney V7: Midjourney’s First New Model in Nearly a Year</strong><br>\n<a href=\"https://theaitrack.com/midjourney-v7-launch/\" target=\"_blank\">https://theaitrack.com/midjourney-v7-launch/</a><br>\n Released on April 4, 2025, Midjourney V7 is Midjourney’s first new image model in nearly a year. 
The alpha rollout closely followed OpenAI’s GPT-4o image tools and quickly went viral on social media, fueling direct comparisons across platforms like TechCrunch, Engadget, VentureBeat, and Ars Technica.</p>\n</details>\n<details><summary>Searched with Kagi<p></p>\n<ul>\n<li>Cohere Command A March 2025</li>\n<li>Gemini 2.5 Pro May 2025</li>\n<li>Qwen3 April 2025 release date</li>\n<li>Nano Banana Google image model 2025</li>\n<li>Gemma 3n June 2025</li></ul></summary><br>\nSources (50):<br>\n<strong>Command A: An Enterprise-Ready Large Language Model</strong><br>\n<a href=\"https://cohere.com/research/papers/command-a-technical-report.pdf\" target=\"_blank\">https://cohere.com/research/papers/command-a-technical-report.pdf</a><br>\nMar 28, 2025 Released as a preprint on March 28, 2025.\n\n<p><strong>Introducing Command A: Max performance, minimal compute</strong><br>\n<a href=\"https://cohere.com/blog/command-a\" target=\"_blank\">https://cohere.com/blog/command-a</a><br>\nMar 13, 2025 Cohere Command A is on par or better than GPT-4o and DeepSeek-V3 across agentic enterprise tasks, with significantly greater efficiency.</p>\n<p><strong>Cohere Command Models: AI-Powered Solutions for Enterprise</strong><br>\n<a href=\"https://cohere.com/command\" target=\"_blank\">https://cohere.com/command</a><br>\nJun 07, 2023 Cohere Command is a family of highly scalable language models that balances high performance with strong accuracy. Streamline content creation at scale. Experience a new era of content creation with Command and North. 
Effortlessly integrate AI into your workflows and quickly generate text, reports, product descriptions, marketing materials, and more at scale.</p>\n<p><strong>Announcing Command A | Cohere</strong><br>\n<a href=\"https://docs.cohere.com/changelog/command-a\" target=\"_blank\">https://docs.cohere.com/changelog/command-a</a><br>\n Command A is available today on the Cohere Platform, HuggingFace, or through the SDK with command-a-03-2025. For more information, check out our dedicated blog post.</p>\n<p><strong>Release Notes</strong><br>\n<a href=\"https://docs.cohere.com/changelog\" target=\"_blank\">https://docs.cohere.com/changelog</a><br>\nAug 23, 2021 Command A 03-2025 is the most performant Command model to date, delivering 150% of the throughput of its predecessor on only two GPUs. Embed v3.0 is a cutting-edge AI search model enhanced with multimodal capabilities, allowing it to generate embeddings from both text and images. March 4, 2025: Our Groundbreaking Multimodal Model, Aya Vision, is Here! Today, Cohere Labs, Cohere’s research arm, is proud to announce Aya Vision.</p>\n<p><strong>CohereLabs/c4ai-command-a-03-2025 · Hugging Face</strong><br>\n<a href=\"https://huggingface.co/CohereLabs/c4ai-command-a-03-2025\" target=\"_blank\">https://huggingface.co/CohereLabs/c4ai-command-a-03-2025</a><br>\nMar 13, 2025 Cohere Labs Command A is an open weights research release of a 111 billion parameter model optimized for demanding enterprises that require fast, secure, and high-quality AI. Compared to other leading proprietary and open-weights models Command A delivers maximum performance with minimum hardware costs, excelling on business-critical agentic and multilingual tasks while being deployable on just two GPUs. License: CC-BY-NC; use also requires adherence to Cohere Labs' Acceptable Use Policy. 
Model: c4ai-command-a-03-2025.</p>\n<p><strong>Cohere Command A (New)</strong><br>\n<a href=\"https://docs.oracle.com/en-us/iaas/Content/generative-ai/cohere-command-a-03-2025.htm\" target=\"_blank\">https://docs.oracle.com/en-us/iaas/Content/generative-ai/cohere-command-a-03-2025.htm</a><br>\nSep 29, 2025 The cohere.command-a-03-2025 model is the most performant Cohere chat model to date with better throughput than cohere.command-r-08-2024. This model performs great for agentic enterprise tasks, and has significantly improved compute efficiency and has a 256,000 token context length. Base model: To run the cohere.command-a-03-2025 model on several endpoints, create as many endpoints as you need on a LARGE_COHERE_V3 cluster (unit-size).</p>\n<p><strong>CohereForAI/c4ai-command-a-03-2025 at main</strong><br>\n<a href=\"https://huggingface.co/CohereForAI/c4ai-command-a-03-2025/tree/main\" target=\"_blank\">https://huggingface.co/CohereForAI/c4ai-command-a-03-2025/tree/main</a><br>\nMar 13, 2025 / c4ai-command-a-03-2025. This repository is publicly accessible, but you have to accept the conditions to access its files and content.</p>\n<p><strong>Cohere Command A (New) - docs.oracle.com</strong><br>\n<a href=\"https://docs.oracle.com/en-us/iaas/Content/generative-ai/cohere-command-a-03-2025.htm?Highlight=2024\" target=\"_blank\">https://docs.oracle.com/en-us/iaas/Content/generative-ai/cohere-command-a-03-2025.htm?Highlight=2024</a><br>\n The cohere.command-a-03-2025 model is the most performant Cohere chat model to date with better throughput than cohere.command-r-08-2024. 
This model performs great for agentic enterprise tasks, and has significantly improved compute efficiency and has a 256,000 token context length.</p>\n<p><strong>Cohere Command A</strong><br>\n<a href=\"https://docs.oracle.com/en-us/iaas/Content/generative-ai/benchmark-cohere-command-a-03-2025.htm\" target=\"_blank\">https://docs.oracle.com/en-us/iaas/Content/generative-ai/benchmark-cohere-command-a-03-2025.htm</a><br>\nMay 27, 2025 For example, a long job description generated from a short bullet list of items. For this case, the prompt length is fixed to 100 tokens and the response length is fixed to 1,000 tokens. The cohere.command-a-03-2025 model hosted on one LARGE_COHERE_V3 unit of a dedicated AI cluster for all regions except for the Saudi Arabia Central (Riyadh) and UAE East (Dubai) regions.</p>\n<p><strong>Gemini 2.5: Our most intelligent AI model</strong><br>\n<a href=\"https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/\" target=\"_blank\">https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/</a><br>\nMar 25, 2025 Gemini 2.5 Pro Experimental is our most advanced model for complex tasks. It tops the LMArena leaderboard — which measures human preferences — by a significant margin, indicating a highly capable model equipped with high-quality style. 
2.5 Pro also shows strong reasoning and code capabilities, leading on common coding, math and science benchmarks.</p>\n<p><strong>Gemini 2.5 Pro update: Coding, web apps with Gemini</strong><br>\n<a href=\"https://blog.google/products/gemini/gemini-2-5-pro-updates/\" target=\"_blank\">https://blog.google/products/gemini/gemini-2-5-pro-updates/</a><br>\n Today we're releasing early access to Gemini 2.5 Pro Preview (I/O edition), an updated version of 2.5 Pro that has significantly improved capabilities for coding, especially building compelling interactive web apps.</p>\n<p><strong>Google I/O 2025: Updates to Gemini 2.5 from Google DeepMind</strong><br>\n<a href=\"https://blog.google/technology/google-deepmind/google-gemini-updates-io-2025/\" target=\"_blank\">https://blog.google/technology/google-deepmind/google-gemini-updates-io-2025/</a><br>\nMay 20, 2025 At I/O 2025, we shared updates to our Gemini 2.5 model series and Deep Think, an experimental enhanced reasoning mode for 2.5 Pro. In March, we announced Gemini 2.5 Pro, our most intelligent model yet, and two weeks ago, we brought you our I/O update early for developers to build incredible web apps. Today, we’re sharing even more updates to our Gemini 2.5 model series.</p>\n<p><strong>Gemini 2.5 Pro: Access Google’s latest preview AI model</strong><br>\n<a href=\"https://blog.google/products/gemini/gemini-2-5-pro-latest-preview/\" target=\"_blank\">https://blog.google/products/gemini/gemini-2-5-pro-latest-preview/</a><br>\n We’re introducing an upgraded preview of Gemini 2.5 Pro, our most intelligent model yet. 
Building on the version we released in May and showed at I/O, this model will be the generally available, stable version starting in a couple of weeks, ready for enterprise-scale applications.</p>\n<p><strong>Gemini App: 7 updates from Google I/O 2025</strong><br>\n<a href=\"https://blog.google/products/gemini/gemini-app-updates-io-2025/\" target=\"_blank\">https://blog.google/products/gemini/gemini-app-updates-io-2025/</a><br>\nMay 20, 2025 Gemini gets more personal, proactive and powerful. The Gemini app has seen incredible momentum recently—and today, we’re introducing new capabilities to help you do even more. Here’s what we announced at Google I/O: Gemini Live, with camera and screen sharing, is now free on Android and iOS for everyone, so you can point your phone at anything and talk it through.</p>\n<p><strong>Gemini 2.5 Native Audio upgrade, plus text-to-speech model updates</strong><br>\n<a href=\"https://blog.google/products/gemini/gemini-audio-model-updates/\" target=\"_blank\">https://blog.google/products/gemini/gemini-audio-model-updates/</a><br>\n An upgraded Gemini 2.5 Native Audio model across Google products and live speech translation in the Google Translate app.</p>\n<p><strong>Introducing Gemini 3 Flash: Benchmarks, global availability</strong><br>\n<a href=\"https://blog.google/products/gemini/gemini-3-flash/\" target=\"_blank\">https://blog.google/products/gemini/gemini-3-flash/</a><br>\n \"Gemini 3 Flash: frontier intelligence built for speed\" introduces a fast, efficient AI model. Gemini 3 Flash offers Pro-grade reasoning at Flash-level speed and a lower cost. It's great for coding, complex analysis, and quick answers in interactive apps. 
Gemini 3 Flash is now the default model in the Gemini app and AI Mode in Search.</p>\n<p><strong>Google Gemini</strong><br>\n<a href=\"https://gemini.google.com/model/2-5-pro\" target=\"_blank\">https://gemini.google.com/model/2-5-pro</a><br>\nMay 01, 2025 Meet Gemini, Google's AI assistant. Get help with writing, planning, brainstorming, and more. Experience the power of generative AI.</p>\n<p><strong>Gemini Apps’ release updates & improvements</strong><br>\n<a href=\"https://gemini.google.com/updates?hl=en%2F\" target=\"_blank\">https://gemini.google.com/updates?hl=en%2F</a><br>\nNov 29, 2024 2025.03.25. Priority access with Gemini Advanced: Try our latest 2.5 Pro (experimental) model. What: Today we’re introducing Gemini 2.5, our most intelligent AI model. 2025.02.20. Document upload is now available to all Gemini users. What: You can seamlessly upload multiple Google Docs, PDFs, and Word documents from Google Drive or your device into Gemini for quick summaries, personalized feedback, and actionable insights.</p>\n<p><strong>Gemini 2.5 Pro Preview: even better coding performance</strong><br>\n<a href=\"https://developers.googleblog.com/en/gemini-2-5-pro-io-improved-coding-performance/\" target=\"_blank\">https://developers.googleblog.com/en/gemini-2-5-pro-io-improved-coding-performance/</a><br>\n Today we’re excited to release Gemini 2.5 Pro Preview (I/O edition). This update features even stronger coding capabilities, for you to start building with before Google I/O later this month.</p>\n<p><strong>Qwen</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Qwen\" target=\"_blank\">https://en.wikipedia.org/wiki/Qwen</a><br>\nOn April 28, 2025, the Qwen3 model family was released,[30] with all models licensed under the Apache 2.0 license. On September 22, 2025, Qwen3-Omni was released under the Apache 2.0 license and made available through chat.qwen.ai, as well as platforms like Hugging Face and ModelScope. 
Qwen3-Omni is a mixed/multimodal model that can process text, images, audio, and video, and deliver real-time streaming responses in both text and natural speech.[36]</p>\n<p><strong>Alibaba Introduces Qwen3, Setting New Benchmark in Open-Source AI with ...</strong><br>\n<a href=\"https://www.alibabagroup.com/en-US/document-1853940226976645120\" target=\"_blank\">https://www.alibabagroup.com/en-US/document-1853940226976645120</a><br>\n April 29, 2025 Alibaba has launched Qwen3, the latest generation of its open-sourced large language model (LLM) family, setting a new benchmark for AI innovation.</p>\n<p><strong>Qwen3 Release and Overview: How it is Different From Other LLM...</strong><br>\n<a href=\"https://www.purevpn.com/blog/news/qwen3-release-and-overview-how-it-is-different-from-other-llm-models/\" target=\"_blank\">https://www.purevpn.com/blog/news/qwen3-release-and-overview-how-it-is-different-from-other-llm-models/</a><br>\nApr 29, 2025 Qwen3 Release and Overview. 
In late April 2025, Alibaba’s DAMO Academy announced Qwen3, the third-generation large language model in its open-source Tongyi Qwen series. Qwen3 was officially released on April 29, 2025 and is made available under a permissive (Apache 2.0) open-source license. The Qwen3 family includes eight models ranging from 0.6 billion to 235 billion parameters.</p>\n<p><strong>Qwen3 will be released in the second week of April</strong><br>\n<a href=\"https://www.reddit.com/r/LocalLLaMA/comments/1jpbnih/qwen3_will_be_released_in_the_second_week_of_april/\" target=\"_blank\">https://www.reddit.com/r/LocalLLaMA/comments/1jpbnih/qwen3_will_be_released_in_the_second_week_of_april/</a><br>\nMar 31, 2025 Alibaba is set to release its new model, Qwen3, in the second week of April 2025. This will be Alibaba's most significant model product in the first half of ...</p>\n<p><strong>Qwen3 models unveiled — everything you need to know - Medium</strong><br>\n<a href=\"https://medium.com/@qki-analytics/qwen3-models-unveiled-everything-you-need-to-know-1ce8bc9d08a8\" target=\"_blank\">https://medium.com/@qki-analytics/qwen3-models-unveiled-everything-you-need-to-know-1ce8bc9d08a8</a><br>\n On April 28, 2025, Alibaba Cloud made headlines by launching Qwen3, the latest iteration of its open-source large language model (LLM) family. This release marks a pivotal moment in the...</p>\n<p><strong>Qwen 3 Benchmarks, Comparisons, Model... - DEV Community</strong><br>\n<a href=\"https://dev.to/best_codes/qwen-3-benchmarks-comparisons-model-specifications-and-more-4hoa\" target=\"_blank\">https://dev.to/best_codes/qwen-3-benchmarks-comparisons-model-specifications-and-more-4hoa</a><br>\nMay 01, 2025 Released on April 29, 2025, Qwen3 comes in eight sizes, including both dense models (from 600M to 32B parameters) and Mixture-of-Experts (MoE) giants, like the flagship Qwen3-235B. 
These MoE models activate only a small slice of their total parameters at a time (like 22B out of 235B), so you get high performance without insane compute requirements.</p>\n<p><strong>Qwen 3: Alibaba's Leap Forward in Open-Source AI - DEV Community</strong><br>\n<a href=\"https://dev.to/mechcloud_academy/qwen-3-alibabas-leap-forward-in-open-source-ai-4hg3\" target=\"_blank\">https://dev.to/mechcloud_academy/qwen-3-alibabas-leap-forward-in-open-source-ai-4hg3</a><br>\nApr 30, 2025 Alibaba's Qwen team has unveiled Qwen 3, the latest iteration of their large language model series, released on April 29, 2025. By releasing model weights and source code, Alibaba encourages innovation, enabling fine-tuning and custom deployments. Compared to DeepSeek’s MoE models, Qwen 3 offers greater versatility and a broader model range. While DeepSeek-V3 activates 37 billion of 671 billion parameters, Qwen3-235B-A22B’s 22 billion active parameters deliver comparable performance with better scalability.</p>\n<p><strong>2025 Complete Guide: Qwen3-235B-A22B-Thinking-2507 - The New...</strong><br>\n<a href=\"https://dev.to/czmilo/2025-complete-guide-qwen3-235b-a22b-thinking-2507-the-new-benchmark-for-open-source-thinking-419d\" target=\"_blank\">https://dev.to/czmilo/2025-complete-guide-qwen3-235b-a22b-thinking-2507-the-new-benchmark-for-open-source-thinking-419d</a><br>\nJul 25, 2025</p>\n<p><strong>Nano Banana Pro - Gemini AI image generator & photo editor</strong><br>\n<a href=\"https://gemini.google/overview/image-generation/\" target=\"_blank\">https://gemini.google/overview/image-generation/</a><br>\n With Nano Banana Pro, Gemini's AI image generator and photo editor, you can create high-quality images and edit photos into countless new creations.</p>\n<p><strong>Introducing Gemini 2.5 Flash Image, our state-of-the-art image model</strong><br>\n<a href=\"https://developers.googleblog.com/en/introducing-gemini-2-5-flash-image/\" 
target=\"_blank\">https://developers.googleblog.com/en/introducing-gemini-2-5-flash-image/</a><br>\n Today, we’re excited to introduce Gemini 2.5 Flash Image (aka nano-banana), our state-of-the-art image generation and editing model.</p>\n<p><strong>Google’s new Nano Banana Pro uses Gemini 3 power... - Ars Technica</strong><br>\n<a href=\"https://arstechnica.com/google/2025/11/google-launches-nano-banana-pro-image-model-adds-ai-image-detection-in-gemini-app/\" target=\"_blank\">https://arstechnica.com/google/2025/11/google-launches-nano-banana-pro-image-model-adds-ai-image-detection-in-gemini-app/</a><br>\nNov 20, 2025 Nano Banana Pro: Your new creative partner. Google says the new model can follow complex prompts to create more accurate images. The model is apparently so capable that it can generate an entire usable infographic in a single shot with no weird AI squiggles in place of words. Nano Banana Pro is also better at maintaining consistency in images.</p>\n<p><strong>Google improves Gemini AI image editing with “nano banana” model</strong><br>\n<a href=\"https://arstechnica.com/ai/2025/08/google-improves-gemini-ai-image-editing-with-nano-banana-model/\" target=\"_blank\">https://arstechnica.com/ai/2025/08/google-improves-gemini-ai-image-editing-with-nano-banana-model/</a><br>\nAug 26, 2025 A new model, known as “nano banana,” started making the rounds with impressive abilities that landed it at the top of the LMArena leaderboard. Now, Google has revealed that nano banana is an innovation from Google DeepMind, and it’s being rolled out to the Gemini app today. 
AI image editing allows you to modify images with a prompt rather than mucking around in Photoshop.</p>\n<p><strong>13 of the best Nano Banana trends from 2025</strong><br>\n<a href=\"https://blog.google/products/gemini/nano-banana-google-trends-2025/\" target=\"_blank\">https://blog.google/products/gemini/nano-banana-google-trends-2025/</a><br>\nDec 29, 2025 People immediately loved the first version of the model, Gemini 2.5 Flash Image, thanks to its ability to maintain a consistent look across edits, blend photos together and otherwise use advanced editing to bring prompts to life. We quickly brought Nano Banana from the Gemini app to other products, like Search and NotebookLM, then in November released Nano Banana Pro. Built on Gemini 3 Pro, Nano Banana Pro uses Gemini’s state-of-the-art reasoning and real-world knowledge to visualize information better than ever before.</p>\n<p><strong>Introducing Nano Banana Pro</strong><br>\n<a href=\"https://blog.google/technology/ai/nano-banana-pro/\" target=\"_blank\">https://blog.google/technology/ai/nano-banana-pro/</a><br>\nNov 20, 2025 Google DeepMind introduces Nano Banana Pro, a new image generation and editing model built on Gemini 3 Pro. You can use it to create accurate visuals with legible text in multiple languages. Try Nano Banana Pro today across Google products like the Gemini app, Google Ads, and Google AI Studio.</p>\n<p><strong>Gemini Image Models (Nano Banana)</strong><br>\n<a href=\"https://deepmind.google/models/gemini-image/\" target=\"_blank\">https://deepmind.google/models/gemini-image/</a><br>\nNov 20, 2025 The Gemini Image model uses deep language understanding to capture the nuance of your prompts — bridging the gap between what you say and what you envision. Gemini Image models are natively multimodal, and respond effectively and efficiently to even the most detailed prompts. 
Nano Banana Pro (Gemini 3 Pro Image).</p>\n<p><strong>Nano Banana - Wikipedia</strong><br>\n<a href=\"https://en.wikipedia.org/wiki/Nano_banana\" target=\"_blank\">https://en.wikipedia.org/wiki/Nano_banana</a><br>\nSep 10, 2025 Nano Banana and Nano Banana Pro are image generation and editing models powered by generative artificial intelligence and developed by Google DeepMind, a subsidiary of Google. A text-to-image variant of the Gemini family of large language models, they were launched in August 2025 and November 2025 respectively as features within the Gemini chatbot and other Google products.</p>\n<p><strong>Gemini 2.5 Flash Image (Nano Banana) - Google DeepMind</strong><br>\n<a href=\"https://deepmind.google/models/gemini-image/flash/\" target=\"_blank\">https://deepmind.google/models/gemini-image/flash/</a><br>\nAug 26, 2025 Generate detailed images with Nano Banana, using text and image prompts.</p>\n<p><strong>Google outlines the best Nano Banana AI image trends from 2025</strong><br>\n<a href=\"https://chromeunboxed.com/google-outlines-the-best-nano-banana-ai-image-trends-from-2025/\" target=\"_blank\">https://chromeunboxed.com/google-outlines-the-best-nano-banana-ai-image-trends-from-2025/</a><br>\nDec 30, 2025 It has been a big year for Google’s image generation and editing model, Nano Banana.</p>\n<p><strong>Introducing Gemma 3n: The developer guide</strong><br>\n<a href=\"https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/\" target=\"_blank\">https://developers.googleblog.com/en/introducing-gemma-3n-developer-guide/</a><br>\nJun 26, 2025 Gemma 3n is designed for the developer community that helped shape Gemma. It’s supported by your favorite tools including Hugging Face Transformers, llama.cpp, Google AI Edge, Ollama, MLX, and many others, enabling you to fine-tune and deploy for your specific on-device applications with ease. This post is the developer deep dive: we'll explore some of the innovations behind Gemma 3n, share new benchmark results, and show you how to start building today.</p>\n<p><strong>Search - Google Developers Blog</strong><br>\n<a href=\"https://developers.googleblog.com/en/search/?product_categories=Gemma\" target=\"_blank\">https://developers.googleblog.com/en/search/?product_categories=Gemma</a><br>\nJun 27, 2024 JUNE 26, 2025 / Gemma. Introducing Gemma 3n: The developer guide. 
The Gemma 3n model has been fully released, building on the success of previous Gemma models and bringing advanced on-device multimodal capabilities to edge devices with unprecedented performance. Explore Gemma 3n's innovations, including its mobile-first architecture, MatFormer technology, Per-Layer Embeddings, KV Cache Sharing, and new audio and MobileNet-V5 vision encoders, and how developers can start building with it today.</p>\n<p><strong>Announcing Gemma 3n preview: powerful, efficient, mobile-first AI</strong><br>\n<a href=\"https://developers.googleblog.com/en/introducing-gemma-3n/\" target=\"_blank\">https://developers.googleblog.com/en/introducing-gemma-3n/</a><br>\nMay 20, 2025 Introducing Gemma 3n – the latest Google open model for accessible AI, featuring unique flexibility, privacy, and expanded multimodal capabilities on mobile devices. You can get started with text and image understanding/generation capabilities today. Gemma 3n marks the next step in democratizing access to cutting-edge, efficient AI. We’re incredibly excited to see what you’ll build as we make this technology progressively available, starting with today's preview. Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.</p>\n<p><strong>Google officially releases the lightweight open-weight model 'Gemma 3n ...</strong><br>\n<a href=\"https://gigazine.net/gsc_news/en/20250627-google-gemma-3n-full-release\" target=\"_blank\">https://gigazine.net/gsc_news/en/20250627-google-gemma-3n-full-release</a><br>\nJun 27, 2025 The Gemma series has a wide variety of models, from large to small, and the Gemma 3n, whose early preview version was released in May 2025, has attracted attention as a small model that can run on a smartphone but has high performance. 
Gemma 3n was officially released on June 26, 2025, and model data can now be downloaded from Hugging Face and Kaggle.</p>\n<p><strong>Gemma 3n fully available in the open-source ecosystem!</strong><br>\n<a href=\"https://huggingface.co/blog/gemma3n\" target=\"_blank\">https://huggingface.co/blog/gemma3n</a><br>\nJun 26, 2025</p>\n<p><strong>google/gemma-3n-E4B</strong><br>\n<a href=\"https://huggingface.co/google/gemma-3n-E4B\" target=\"_blank\">https://huggingface.co/google/gemma-3n-E4B</a><br>\nJul 10, 2025 Gemma 3n models are designed for efficient execution on low-resource devices. They are capable of multimodal input, handling text, image, video, ...</p>\n<p><strong>google/gemma-3n-E2B · Hugging Face</strong><br>\n<a href=\"https://huggingface.co/google/gemma-3n-E2B\" target=\"_blank\">https://huggingface.co/google/gemma-3n-E2B</a><br>\nJun 26, 2025 Description. Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3n models are designed for efficient execution on low-resource devices. They are capable of multimodal input, handling text, image, video, and audio input, and generating text outputs, with open weights for pre-trained and instruction-tuned variants. These models were trained with data in over 140 spoken languages.</p>\n<p><strong>google/gemma-3n-E2B-it-litert-lm · Hugging Face</strong><br>\n<a href=\"https://huggingface.co/google/gemma-3n-E2B-it-litert-lm\" target=\"_blank\">https://huggingface.co/google/gemma-3n-E2B-it-litert-lm</a><br>\nJun 10, 2025 Gemma 3n models use selective parameter activation technology to reduce resource requirements. This technique allows the models to operate at an effective size of 2B and 4B parameters, which is lower than the total number of parameters they contain. 
For more information on Gemma 3n's efficient parameter management technology, see the Gemma 3n page. These models were trained on a dataset that includes a wide variety of sources totalling approximately 11 trillion tokens. The knowledge cutoff date for the training data was June 2024. Here are the key components</p>\n<p><strong>Google Launches Gemma 3N: The Most Powerful Open-Source AI ... - Medium</strong><br>\n<a href=\"https://medium.com/@amitXD/google-launches-gemma-3n-the-most-powerful-open-source-ai-model-for-phones-edge-devices-0a1c7f536c76\" target=\"_blank\">https://medium.com/@amitXD/google-launches-gemma-3n-the-most-powerful-open-source-ai-model-for-phones-edge-devices-0a1c7f536c76</a><br>\nJun 30, 2025 On June 26, 2025, they officially launched Gemma 3N, a next-gen multimodal AI model designed to run on smartphones, Raspberry Pi boards, and other low-power edge devices — without compromising intelligence. This isn’t just a small model pretending to be big. It’s a frontier-level LLM that’s compact, multilingual, multimodal, and blazing fast — while fitting into just 2–3 GB of VRAM.</p>\n<p><strong>Gemma 3n model overview - Google AI for Developers</strong><br>\n<a href=\"https://ai.google.dev/gemma/docs/gemma-3n\" target=\"_blank\">https://ai.google.dev/gemma/docs/gemma-3n</a><br>\n Gemma 3n is a generative AI model optimized for use in everyday devices, such as phones, laptops, and tablets.</p>\n</details>\n<details><summary>Response planned</summary>\n<p>I have gathered comprehensive information about AI model releases throughout 2025 from multiple sources including Wikipedia, company blogs, tech news sites, and official documentation. 
The data covers:</p>\n<ul>\n<li>Major LLM releases from OpenAI (GPT-4.5, GPT-4.1, o3, o4-mini, GPT-5, GPT-5.2), Anthropic (Claude 3.7, Claude 4, Claude 4.1, Claude 4.5, Claude Opus 4.5), Google (Gemini 2.0, 2.5, 3 series), Meta (Llama 4), xAI (Grok 3, Grok 4), DeepSeek (R1, V3 updates), Alibaba (Qwen 2.5 Max, Qwen3), Mistral (Small 3, Large 3), and others</li>\n<li>Video generation models: Sora 2, Veo 3, Veo 3.1, Runway Gen-4, Gen-4.5</li>\n<li>Image models: GPT-4o image generation, Imagen 4, Nano Banana, Midjourney V7, Adobe Firefly Image 4/5</li>\n<li>Agentic frameworks: OpenAI Operator, Claude Code, Codex CLI, Claude Agent SDK</li>\n<li>Specialized models: coding models, reasoning models, multimodal models</li>\n</ul>\n<p>The information is well-sourced with specific dates for most releases.</p>\n<p>Structure the response as a comprehensive chronological timeline from January to December 2025, organized by month. Each entry will be a bullet point with one sentence describing the release. 
Cover:</p>\n<ul>\n<li>January: DeepSeek-R1, Qwen 2.5 Max, Mistral Small 3, Gemini 2.0 Flash, o3-mini, Operator, Doubao Realtime Voice</li>\n<li>February: Deep Research, Gemini 2.0 Pro, Le Chat, Grok 3, Claude 3.7 Sonnet + Claude Code, GPT-4.5</li>\n<li>March: Gemini 2.5 Pro, DeepSeek-V3-0324, Gemma 3, Cohere Command A, GPT-4o image generation, Mistral Small 3.1, Runway Gen-4</li>\n<li>April: Llama 4, o3/o4-mini, GPT-4.1, Midjourney V7, Adobe Firefly Image 4, Qwen3</li>\n<li>May: AlphaEvolve, Veo 3, Claude 4, Codex, Flow, Imagen 4, Gemma 3n preview</li>\n<li>June: Gemma 3n full release, Mistral Small 3.2</li>\n<li>July: Grok 4, GLM-4.5</li>\n<li>August: GPT-OSS, Claude 4.1, GPT-5, DeepSeek-V3.1, Nano Banana</li>\n<li>September: Apertus, Claude 4.5, Sora 2, Claude Code 2.0</li>\n<li>October: Veo 3.1, Adobe Firefly Image 5</li>\n<li>November: Claude Opus 4.5, ERNIE 5.0, Nano Banana Pro</li>\n<li>December: Mistral 3, DeepSeek-V3.2, Devstral 2, GPT-5.2, Gemini 3 Flash</li>\n</ul>\n</details><h1>The 2025 Generative AI Release Timeline</h1>\n<h2>January 2025</h2>\n<ul>\n<li><strong>DeepSeek-R1</strong> (January 20): DeepSeek released DeepSeek-R1, an open-source reasoning model based on DeepSeek-V3 that achieves performance comparable to OpenAI o1 across math, code, and reasoning tasks using chain-of-thought methodology.</li>\n<li><strong>Doubao Realtime Voice Model</strong> (January 20): ByteDance launched the Doubao Realtime Voice Model, an integrated voice understanding and generation model enabling end-to-end speech dialogues on the Doubao app. 
<sup class=\"trigger\"><a href=\"https://timeline.the-blueprint.ai/#:~:text=28%20October,%3B%2015\">6</a><span class=\"tooltip top center citation\"><a href=\"https://timeline.the-blueprint.ai/#:~:text=28%20October,%3B%2015\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJyulldmP2ETUYh6LPtXMsl6zq0TPxWtuvKb1xoaDmm9IYdi8Rkg1hdfsQOywf8-KL\"> timeline.the-blueprint.ai</span><b>6. Generative AI Timeline | The Blueprint</b></a><span class=\"passage\">28 October 2025. Adobe release Firefly Image 5 ; 21 October 2025. OpenAI introduce ChatGPT Atlas browser ; 16 October 2025. Anthropic introduce Agent Skills ; 15 ...</span></span></sup></li>\n<li><strong>OpenAI Operator</strong> (January 23): OpenAI launched Operator, a general-purpose AI agent powered by the Computer-Using Agent (CUA) model that can autonomously control a web browser to perform tasks like clicking, typing, and scrolling. <sup class=\"trigger\"><a href=\"https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html#:~:text=Artificial%20intelligence,and%20DeepSeek.\">8</a><span class=\"tooltip top center citation\"><a href=\"https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html#:~:text=Artificial%20intelligence,and%20DeepSeek.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ2QLXvGCU8S3z9xamAa4vPpIb_FiSmI5IPPHyynQHBb0%3D\"> www.cnbc.com</span><b>8. 
French AI lab Mistral releases new AI models as it looks to keep pace ...</b></a><span class=\"passage\">Artificial intelligence startup Mistral released a new suite of models Tuesday as it looks to keep pace with leading AI labs Google, OpenAI and DeepSeek.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations#:~:text=See%20how,revolutionized%20AI.\">15</a><span class=\"tooltip top center citation\"><a href=\"https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations#:~:text=See%20how,revolutionized%20AI.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJVsUwoxweF2cHESR7OzfZ3x2uhGK06FCwMyTe0wXYCqE%3D\"> www.qualcomm.com</span><b>15. AI Timeline | Innovations and Advancements</b></a><span class=\"passage\">See how generative AI has evolved, from early attempts to cutting-edge transformers. Uncover the advancements that revolutionized AI.</span></span></sup></li>\n<li><strong>Qwen 2.5 Max</strong> (January 25): Alibaba released Qwen2.5-Max, a large-scale MoE model that surpassed DeepSeek-V3 and competed with leading proprietary models across various benchmarks. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,is%20open-source.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,is%20open-source.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 
2025 in artificial intelligence</b></a><span class=\"passage\">The following is a list of events of the year 2025 in artificial intelligence, as well as predicted and scheduled events that have not yet occurred.January 20 – DeepSeek releases DeepSeek-R1, a large language model based on DeepSeek-V3 utilising a chain-of-thought, stating it achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.[1] DeepSeek-R1 is open-source.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.shakudo.io/blog/top-9-large-language-models#:~:text=As%20of,DeepSeek%20V3.&text=Mistral%20AI%2C,prominent%20player\">16</a><span class=\"tooltip top center citation\"><a href=\"https://www.shakudo.io/blog/top-9-large-language-models#:~:text=As%20of,DeepSeek%20V3.&text=Mistral%20AI%2C,prominent%20player\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJmlTltlSNyuFEPES75cI2m9n850JedvMmgOhqRISd0U0%3D\"> www.shakudo.io</span><b>16. Top 9 Large Language Models as of December 2025</b></a><span class=\"passage\">As of late 2024 and mid-2025, DeepSeek has been actively releasing and updating its models, including the DeepSeek V3. ... Mistral AI, a prominent player ...</span></span></sup></li>\n<li><strong>Mistral Small 3</strong> (January 30): Mistral AI released Mistral Small 3, a latency-optimized 24B-parameter open-source model under Apache 2.0 license, competitive with Llama 3.3 70B while being 3x faster.</li>\n<li><strong>Gemini 2.0 Flash</strong> (January 30): Google released Gemini 2.0 Flash as the new default model in the Gemini app, featuring superior speed, built-in tool use, multimodal generation, and a 1M token context window. 
<sup class=\"trigger\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\">4</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>4. The Latest Generative AI Models in 2025: A Comprehensive Guide</b></a><span class=\"passage\">What started as text-based language models has now exploded into sophisticated multimodal systems that understand and generate text, images, audio, and video, often in real-time. In this blog post, I bring you the most up-to-date and comprehensive roundup of generative AI models released in 2025, across companies like OpenAI, Google DeepMind, Anthropic, Alibaba, and more.As of May 2025, the evolution of generative AI models shows some clear trends: Reasoning Before Responding: Claude 3.7 and ...</span></span></sup></li>\n<li><strong>OpenAI o3-mini</strong> (January 31): OpenAI released o3-mini, a cost-efficient reasoning model with three reasoning effort levels (low, medium, high), available to all ChatGPT users including free-tier. 
<sup class=\"trigger\"><a href=\"https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html#:~:text=Artificial%20intelligence,and%20DeepSeek.\">8</a><span class=\"tooltip top center citation\"><a href=\"https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html#:~:text=Artificial%20intelligence,and%20DeepSeek.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ2QLXvGCU8S3z9xamAa4vPpIb_FiSmI5IPPHyynQHBb0%3D\"> www.cnbc.com</span><b>8. French AI lab Mistral releases new AI models as it looks to keep pace ...</b></a><span class=\"passage\">Artificial intelligence startup Mistral released a new suite of models Tuesday as it looks to keep pace with leading AI labs Google, OpenAI and DeepSeek.</span></span></sup></li>\n</ul>\n<h2>February 2025</h2>\n<ul>\n<li><strong>ChatGPT Deep Research</strong> (February 3): OpenAI released Deep Research, an AI system integrated into ChatGPT that autonomously browses the web for 5-30 minutes to generate comprehensive cited reports on user-specified topics. <sup class=\"trigger\"><a href=\"https://lifearchitect.ai/timeline/#:~:text=Time%20between,My%20Cheese%3F\">7</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/timeline/#:~:text=Time%20between,My%20Cheese%3F\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>7. Timeline of AI and language models – Dr Alan D. Thompson</b></a><span class=\"passage\">Time between releases of OpenAI’s GPT models. Model. 
Months since last release.Living documents Alan’s conservative countdown to AGI Alan’s ASI checklist Alan’s technical highlights (new tab) ALPrompt Calculations and conversions Compute Table (new tab) Datasets Table LLM-Reset/LLM-Normalize Models Table Timeline of AI and language models Visualizations The Who Moved My Cheese?</span></span></sup></li>\n<li><strong>Gemini 2.0 Pro</strong> (February 5): Google released Gemini 2.0 Pro alongside the GA version of Gemini 2.0 Flash, offering enhanced quality and capabilities for the agentic era. <sup class=\"trigger\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\">4</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>4. The Latest Generative AI Models in 2025: A Comprehensive Guide</b></a><span class=\"passage\">What started as text-based language models has now exploded into sophisticated multimodal systems that understand and generate text, images, audio, and video, often in real-time. 
In this blog post, I bring you the most up-to-date and comprehensive roundup of generative AI models released in 2025, across companies like OpenAI, Google DeepMind, Anthropic, Alibaba, and more.As of May 2025, the evolution of generative AI models shows some clear trends: Reasoning Before Responding: Claude 3.7 and ...</span></span></sup><sup class=\"trigger\"><a href=\"https://lifearchitect.ai/agi/#:~:text=Job%20listing%2C,vs%20AI%29.\">9</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/agi/#:~:text=Job%20listing%2C,vs%20AI%29.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>9. Alan’s conservative countdown to AGI – Dr Alan D. Thompson...</b></a><span class=\"passage\">Job listing, archive. Apr/2025. INFO: OpenAI Memory: ‘ChatGPT will reference your past conversations to recall useful information you’ve shared. It uses this to learn about your interests and preferences, helping make future chats more personalized and relevant.’ and ‘You can also teach ChatGPT something new by saying it in a chat‘, however, ‘it doesn’t retain every detail.’Timeline of AI and language models Timeline of AI and language models. AI + IQ testing (human vs AI) AI + IQ testing ...</span></span></sup></li>\n<li><strong>Mistral Le Chat</strong> (February 6): Mistral AI released Le Chat, an AI assistant capable of generating responses at up to 1,000 words per second. 
<sup class=\"trigger\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\">5</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>5. Inside language models (from GPT to Nova) – Dr Alan D. Thompson...</b></a><span class=\"passage\">Language model sizes Summary of current models Count of LLMs released per month (2024) Compute Context windows Achievements unlocked: Emergent abilities of LLMs Large language models: API or on-premise Increasing dataset sizes 2018-2025 GPT-3’s top 10 datasets by domain/source Contents of GPT-3 & the Pile v1 Contents of Chinese models Language model sizes & predictions [...]What’s in my AI? A Comprehensive Analysis of Datasets Used to Train GPT-1, GPT-2, GPT-3, GPT-NeoX-20B, Megatron-11B, ...</span></span></sup></li>\n<li><strong>Imagen 3 API</strong> (February 6): Google made Imagen 3, its state-of-the-art image generation model, available through the Gemini API for developers. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%2027,enhancing%20efficiency.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%2027,enhancing%20efficiency.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 
2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* 27 March – Engineers in the fields of electrical and computer science create a groundbreaking AI system known as \"Super-Turing AI,\" designed to function more similarly to the human brain. Unlike conventional models that isolate tasks and shuttle massive volumes of data between components, this innovative AI merges key processes, streamlining operations and enhancing efficiency.</span></span></sup></li>\n<li><strong>Grok 3</strong> (February 17-19): xAI released Grok 3, claiming it to be \"the smartest AI on Earth\" with 10x the compute of previous models, achieving 93.3% on AIME 2025 and 84.6% on GPQA benchmarks. <sup class=\"trigger\"><a href=\"https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations#:~:text=See%20how,revolutionized%20AI.\">15</a><span class=\"tooltip top center citation\"><a href=\"https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations#:~:text=See%20how,revolutionized%20AI.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJVsUwoxweF2cHESR7OzfZ3x2uhGK06FCwMyTe0wXYCqE%3D\"> www.qualcomm.com</span><b>15. AI Timeline | Innovations and Advancements</b></a><span class=\"passage\">See how generative AI has evolved, from early attempts to cutting-edge transformers. 
Uncover the advancements that revolutionized AI.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.cloudzero.com/blog/top-ai-models/#:~:text=Open-weight%20and,tweak%20them\">27</a><span class=\"tooltip top center citation\"><a href=\"https://www.cloudzero.com/blog/top-ai-models/#:~:text=Open-weight%20and,tweak%20them\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJikAiVN7mlh6SnIIWo2Lfd1A6hyOkpROkFnRO_obMH5E%3D\"> www.cloudzero.com</span><b>27. The Top AI Models And Trends Shaping SaaS in 2025</b></a><span class=\"passage\">Open-weight and open-source models. LLaMA 4, Mistral, and DeepSeek's open releases let you run models on your own infrastructure, tweak them ...</span></span></sup></li>\n<li><strong>Claude 3.7 Sonnet & Claude Code</strong> (February 24): Anthropic released Claude 3.7 Sonnet, the first hybrid reasoning model with extended thinking capabilities, alongside Claude Code, an agentic coding tool for terminal-based development. <sup class=\"trigger\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\">5</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>5. Inside language models (from GPT to Nova) – Dr Alan D. 
Thompson...</b></a><span class=\"passage\">Language model sizes Summary of current models Count of LLMs released per month (2024) Compute Context windows Achievements unlocked: Emergent abilities of LLMs Large language models: API or on-premise Increasing dataset sizes 2018-2025 GPT-3’s top 10 datasets by domain/source Contents of GPT-3 & the Pile v1 Contents of Chinese models Language model sizes & predictions [...]What’s in my AI? A Comprehensive Analysis of Datasets Used to Train GPT-1, GPT-2, GPT-3, GPT-NeoX-20B, Megatron-11B, ...</span></span></sup><sup class=\"trigger\"><a href=\"https://aiflashreport.com/model-releases.html#:~:text=Track%20the,and%20availability.\">14</a><span class=\"tooltip top center citation\"><a href=\"https://aiflashreport.com/model-releases.html#:~:text=Track%20the,and%20availability.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJFoD1sr1AkJsHlePc9uSLDismywaM4jmYPrK44I_zVKo%3D\"> aiflashreport.com</span><b>14. AI Model Release Timeline - AI Flash Report</b></a><span class=\"passage\">Track the latest AI model releases: GPT-4, Claude, Mistral, and more. Comprehensive timeline with specs, performance metrics, and availability.</span></span></sup></li>\n<li><strong>YandexGPT 5</strong> (February 25): Yandex released YandexGPT 5 Lite Pretrain and Pro versions for use in the Alice Neural Network chatbot. 
<sup class=\"trigger\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\">4</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>4. The Latest Generative AI Models in 2025: A Comprehensive Guide</b></a><span class=\"passage\">What started as text-based language models has now exploded into sophisticated multimodal systems that understand and generate text, images, audio, and video, often in real-time. In this blog post, I bring you the most up-to-date and comprehensive roundup of generative AI models released in 2025, across companies like OpenAI, Google DeepMind, Anthropic, Alibaba, and more.As of May 2025, the evolution of generative AI models shows some clear trends: Reasoning Before Responding: Claude 3.7 and ...</span></span></sup></li>\n<li><strong>Gemini 2.0 Flash-Lite</strong> (February 25): Google released Gemini 2.0 Flash-Lite, a lighter version optimized for speed and efficiency. <sup class=\"trigger\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\">5</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>5. 
Inside language models (from GPT to Nova) – Dr Alan D. Thompson...</b></a><span class=\"passage\">Language model sizes Summary of current models Count of LLMs released per month (2024) Compute Context windows Achievements unlocked: Emergent abilities of LLMs Large language models: API or on-premise Increasing dataset sizes 2018-2025 GPT-3’s top 10 datasets by domain/source Contents of GPT-3 & the Pile v1 Contents of Chinese models Language model sizes & predictions [...]What’s in my AI? A Comprehensive Analysis of Datasets Used to Train GPT-1, GPT-2, GPT-3, GPT-NeoX-20B, Megatron-11B, ...</span></span></sup></li>\n<li><strong>GPT-4.5</strong> (February 27): OpenAI announced GPT-4.5 as a research preview, its largest non-reasoning model to date, later reported to be indistinguishable from humans in text conversations. <sup class=\"trigger\"><a href=\"https://lifearchitect.ai/agi/#:~:text=Job%20listing%2C,vs%20AI%29.\">9</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/agi/#:~:text=Job%20listing%2C,vs%20AI%29.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>9. Alan’s conservative countdown to AGI – Dr Alan D. Thompson...</b></a><span class=\"passage\">Job listing, archive. Apr/2025. INFO: OpenAI Memory: ‘ChatGPT will reference your past conversations to recall useful information you’ve shared. It uses this to learn about your interests and preferences, helping make future chats more personalized and relevant.’ and ‘You can also teach ChatGPT something new by saying it in a chat‘, however, ‘it doesn’t retain every detail.’Timeline of AI and language models Timeline of AI and language models. 
AI + IQ testing (human vs AI) AI + IQ testing ...</span></span></sup><sup class=\"trigger\"><a href=\"https://medium.com/h7w/the-top-5-ai-models-of-2025-whats-new-and-how-to-use-them-6e31270804d7#:~:text=Each%20of,technical%20tasks.\">19</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/h7w/the-top-5-ai-models-of-2025-whats-new-and-how-to-use-them-6e31270804d7#:~:text=Each%20of,technical%20tasks.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>19. The Top 5 AI Models of 2025: What’s New and How to Use... | Medium</b></a><span class=\"passage\">Each of these top 5 AI models expected in 2025 brings a unique set of strengths to the table. GPT-4.5 aims to be the all-encompassing intelligent system, unifying various AI capabilities for ease of use and broader applications. Grok 3 stands out for its advanced reasoning and real-time data access through X integration, making it a powerful tool for research and technical tasks.</span></span></sup></li>\n</ul>\n<h2>March 2025</h2>\n<ul>\n<li><strong>Cohere Aya Vision</strong> (March 4): Cohere Labs released Aya Vision, an open-weights multilingual vision-language model for visual understanding, released in 8B and 32B variants. <sup class=\"trigger\"><a href=\"https://aiflashreport.com/model-releases.html#:~:text=Track%20the,and%20availability.\">14</a><span class=\"tooltip top center citation\"><a href=\"https://aiflashreport.com/model-releases.html#:~:text=Track%20the,and%20availability.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJFoD1sr1AkJsHlePc9uSLDismywaM4jmYPrK44I_zVKo%3D\"> aiflashreport.com</span><b>14. AI Model Release Timeline - AI Flash Report</b></a><span class=\"passage\">Track the latest AI model releases: GPT-4, Claude, Mistral, and more. 
Comprehensive timeline with specs, performance metrics, and availability.</span></span></sup></li>\n<li><strong>Gemma 3</strong> (March 12): Google released Gemma 3, a collection of lightweight open models with 128K context window, multimodal capabilities (text and image), and support for over 140 languages. <sup class=\"trigger\"><a href=\"https://timeline.the-blueprint.ai/#:~:text=28%20October,%3B%2015\">6</a><span class=\"tooltip top center citation\"><a href=\"https://timeline.the-blueprint.ai/#:~:text=28%20October,%3B%2015\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJyulldmP2ETUYh6LPtXMsl6zq0TPxWtuvKb1xoaDmm9IYdi8Rkg1hdfsQOywf8-KL\"> timeline.the-blueprint.ai</span><b>6. Generative AI Timeline | The Blueprint</b></a><span class=\"passage\">28 October 2025. Adobe release Firefly Image 5 ; 21 October 2025. OpenAI introduce ChatGPT Atlas browser ; 16 October 2025. Anthropic introduce Agent Skills ; 15 ...</span></span></sup><sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/Generative_artificial_intelligence#:~:text=By%20mid,difficulties%20with\">18</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/Generative_artificial_intelligence#:~:text=By%20mid,difficulties%20with\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>18. 
Generative artificial intelligence</b></a><span class=\"passage\">By mid 2025, despite continued consumer growth, many companies were increasingly abandoning generative AI pilot projects as they had difficulties with ...</span></span></sup></li>\n<li><strong>Cohere Command A</strong> (March 13): Cohere released Command A, a 111B-parameter enterprise-focused model deployable on just two GPUs, matching GPT-4o and DeepSeek-V3 on agentic tasks. <sup class=\"trigger\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\">4</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>4. The Latest Generative AI Models in 2025: A Comprehensive Guide</b></a><span class=\"passage\">What started as text-based language models has now exploded into sophisticated multimodal systems that understand and generate text, images, audio, and video, often in real-time. 
In this blog post, I bring you the most up-to-date and comprehensive roundup of generative AI models released in 2025, across companies like OpenAI, Google DeepMind, Anthropic, Alibaba, and more.As of May 2025, the evolution of generative AI models shows some clear trends: Reasoning Before Responding: Claude 3.7 and ...</span></span></sup><sup class=\"trigger\"><a href=\"https://lifearchitect.ai/timeline/#:~:text=Time%20between,My%20Cheese%3F\">7</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/timeline/#:~:text=Time%20between,My%20Cheese%3F\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>7. Timeline of AI and language models – Dr Alan D. Thompson</b></a><span class=\"passage\">Time between releases of OpenAI’s GPT models. Model. Months since last release.Living documents Alan’s conservative countdown to AGI Alan’s ASI checklist Alan’s technical highlights (new tab) ALPrompt Calculations and conversions Compute Table (new tab) Datasets Table LLM-Reset/LLM-Normalize Models Table Timeline of AI and language models Visualizations The Who Moved My Cheese?</span></span></sup></li>\n<li><strong>Mistral Small 3.1</strong> (March 17): Mistral released Mistral Small 3.1, adding multimodal (image) capabilities and 128K context length while still fitting on a single RTX 4090. <sup class=\"trigger\"><a href=\"https://lifearchitect.ai/agi/#:~:text=Job%20listing%2C,vs%20AI%29.\">9</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/agi/#:~:text=Job%20listing%2C,vs%20AI%29.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>9. Alan’s conservative countdown to AGI – Dr Alan D. 
Thompson...</b></a><span class=\"passage\">Job listing, archive. Apr/2025. INFO: OpenAI Memory: ‘ChatGPT will reference your past conversations to recall useful information you’ve shared. It uses this to learn about your interests and preferences, helping make future chats more personalized and relevant.’ and ‘You can also teach ChatGPT something new by saying it in a chat‘, however, ‘it doesn’t retain every detail.’Timeline of AI and language models Timeline of AI and language models. AI + IQ testing (human vs AI) AI + IQ testing ...</span></span></sup><sup class=\"trigger\"><a href=\"https://medium.com/h7w/the-top-5-ai-models-of-2025-whats-new-and-how-to-use-them-6e31270804d7#:~:text=Each%20of,technical%20tasks.\">19</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/h7w/the-top-5-ai-models-of-2025-whats-new-and-how-to-use-them-6e31270804d7#:~:text=Each%20of,technical%20tasks.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>19. The Top 5 AI Models of 2025: What’s New and How to Use... | Medium</b></a><span class=\"passage\">Each of these top 5 AI models expected in 2025 brings a unique set of strengths to the table. GPT-4.5 aims to be the all-encompassing intelligent system, unifying various AI capabilities for ease of use and broader applications. Grok 3 stands out for its advanced reasoning and real-time data access through X integration, making it a powerful tool for research and technical tasks.</span></span></sup></li>\n<li><strong>Baidu ERNIE 4.5 & X1</strong> (March 16): Baidu released ERNIE 4.5 foundation model and ERNIE X1 reasoning model, with X1 performing comparably to DeepSeek R1 at half the price. 
<sup class=\"trigger\"><a href=\"https://llm-stats.com/#:~:text=Comprehensive%20AI,Compare%20performance\">10</a><span class=\"tooltip top center citation\"><a href=\"https://llm-stats.com/#:~:text=Comprehensive%20AI,Compare%20performance\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJh7Pyp4mLUPYsYy1tF3cwU6RlAy6AgEH5vY69V478lNE%3D\"> llm-stats.com</span><b>10. AI Leaderboards 2025 - Compare LLM, TTS, STT, Video ...</b></a><span class=\"passage\">Comprehensive AI leaderboards comparing LLM, text-to-speech, speech-to-text, video generation, image generation, and embedding models. Compare performance ...</span></span></sup></li>\n<li><strong>Gemini 2.5 Pro</strong> (March 25): Google released Gemini 2.5 Pro Experimental, its most intelligent model yet, topping the LMArena leaderboard with strong reasoning and coding capabilities. <sup class=\"trigger\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:\">2</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>2. The Evolution of Generative AI: 2018–2025 Timeline... 
| Medium</b></a><span class=\"passage\"></span></span></sup><sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\">3</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>3. History of artificial intelligence - Wikipedia</b></a><span class=\"passage\">The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, amongst other use cases. Investment in AI boomed in the 2020s. The recent AI boom, initiated by the development of transformer architecture, led to the rapid scaling and public releases of large language models (LLMs) like ChatGPT.</span></span></sup></li>\n<li><strong>GPT-4o Image Generation</strong> (March 25): OpenAI launched native image generation capabilities in GPT-4o, enabling photorealistic output, image transformation, and reliable text rendering in images.</li>\n<li><strong>DeepSeek-V3-0324</strong> (March 25): DeepSeek released an updated V3 model with improved performance, now under MIT license. 
<sup class=\"trigger\"><a href=\"https://www.forbes.com/sites/johnwerner/2025/08/06/new-models-from-openai-anthropic-google--all-at-the-same-time/#:~:text=It%E2%80%99s%20Christmas,OSS%20Models.\">12</a><span class=\"tooltip top center citation\"><a href=\"https://www.forbes.com/sites/johnwerner/2025/08/06/new-models-from-openai-anthropic-google--all-at-the-same-time/#:~:text=It%E2%80%99s%20Christmas,OSS%20Models.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJb3-qX0Ki_cnfl0lYF7DUAW3ojYuPzodQvLR2LLiScoY%3D\"> www.forbes.com</span><b>12. New Models From OpenAI, Anthropic, Google – All At The Same Time - Forbes</b></a><span class=\"passage\">It’s Christmas in August – at least, for those tech-wonks who are interested in new model releases. Today’s news is a very full stocking of brand new LLM editions from three of the biggies – OpenAI, Anthropic, and Google. I’ll go over these one by one, discussing what these most recent model iterations bring to the table. OpenAI OSS Models.</span></span></sup></li>\n<li><strong>Runway Gen-4</strong> (March 31): Runway released Gen-4, enabling consistent character, location, and object generation across scenes with coherent world environments. <sup class=\"trigger\"><a href=\"https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations#:~:text=See%20how,revolutionized%20AI.\">15</a><span class=\"tooltip top center citation\"><a href=\"https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations#:~:text=See%20how,revolutionized%20AI.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJVsUwoxweF2cHESR7OzfZ3x2uhGK06FCwMyTe0wXYCqE%3D\"> www.qualcomm.com</span><b>15. 
AI Timeline | Innovations and Advancements</b></a><span class=\"passage\">See how generative AI has evolved, from early attempts to cutting-edge transformers. Uncover the advancements that revolutionized AI.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.bloomberg.com/news/articles/2025-04-28/alibaba-rolls-out-latest-flagship-ai-model-in-post-deepseek-race#:~:text=Alibaba%20%E2%80%94,recent%20months.\">20</a><span class=\"tooltip top center citation\"><a href=\"https://www.bloomberg.com/news/articles/2025-04-28/alibaba-rolls-out-latest-flagship-ai-model-in-post-deepseek-race#:~:text=Alibaba%20%E2%80%94,recent%20months.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJIbCAMLeXmYi0pTgt21pWSOLmXfFcLQGsSdJ4AXbb3I8%3D\"> www.bloomberg.com</span><b>20. China's Alibaba Rolls Out Latest Flagship Qwen AI Model... - Bloomberg</b></a><span class=\"passage\">Alibaba — which in 2025 declared itself all-in on the AI race — came out with a new model in its Qwen 2.5 series just a few weeks ago that can process text, pictures, audio and video — and is efficient enough to run directly on phones and laptops. It unveiled a new version of its AI assistant Quark app in March. Read More: China Floods the World With AI Models After DeepSeek Success. OpenAI , Google and Anthropic have similarly released a flurry of models in recent months.</span></span></sup></li>\n</ul>\n<h2>April 2025</h2>\n<ul>\n<li><strong>Midjourney V7</strong> (April 3-4): Midjourney released V7, its first new AI image model in nearly a year, featuring improved text and image prompt handling with richer textures and more coherent details. 
<sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,to%20date.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,to%20date.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* February 27 – OpenAI announces a research preview of GPT-4.5, its largest and most advanced AI model to date.</span></span></sup></li>\n<li><strong>Llama 4</strong> (April 5): Meta released Llama 4 (Scout, Maverick, and Behemoth variants), natively multimodal models trained on text, image, and video data with up to 1 million token context length. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20January,is%20open-source.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20January,is%20open-source.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* January 20 – DeepSeek releases DeepSeek-R1, a large language model based on DeepSeek-V3 utilising a chain-of-thought, stating it achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. 
DeepSeek-R1 is open-source.</span></span></sup><sup class=\"trigger\"><a href=\"https://tribune.net.ph/2025/01/31/mistral-awed-by-deepseeks-ai-model#:~:text=French%20AI,its%20own.\">21</a><span class=\"tooltip top center citation\"><a href=\"https://tribune.net.ph/2025/01/31/mistral-awed-by-deepseeks-ai-model#:~:text=French%20AI,its%20own.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ8u3aqFXgf9E7x15c44BSPbDldkruTjhJYKD4QPFyN5g%3D\"> tribune.net.ph</span><b>21. Mistral awed by DeepSeek’s AI model - tribune.net.ph</b></a><span class=\"passage\">French AI startup Mistral on Thursday hailed Chinese competitor DeepSeek’s R1 model as “great” for the fast-developing sector, while announcing another new release of its own.</span></span></sup></li>\n<li><strong>GPT-4.1</strong> (April 14): OpenAI released GPT-4.1 family (including mini and nano variants) with major gains in coding, instruction following, and long-context understanding up to 1M tokens.</li>\n<li><strong>OpenAI o3 & o4-mini</strong> (April 16): OpenAI released o3 and o4-mini, the smartest reasoning models to date, representing a step change in ChatGPT's capabilities for complex tasks. <sup class=\"trigger\"><a href=\"https://www.shakudo.io/blog/top-9-large-language-models#:~:text=As%20of,DeepSeek%20V3.&text=Mistral%20AI%2C,prominent%20player\">16</a><span class=\"tooltip top center citation\"><a href=\"https://www.shakudo.io/blog/top-9-large-language-models#:~:text=As%20of,DeepSeek%20V3.&text=Mistral%20AI%2C,prominent%20player\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJmlTltlSNyuFEPES75cI2m9n850JedvMmgOhqRISd0U0%3D\"> www.shakudo.io</span><b>16. 
Top 9 Large Language Models as of December 2025</b></a><span class=\"passage\">As of late 2024 and mid-2025, DeepSeek has been actively releasing and updating its models, including the DeepSeek V3. ... Mistral AI, a prominent player ...</span></span></sup><sup class=\"trigger\"><a href=\"https://medium.com/h7w/the-top-5-ai-models-of-2025-whats-new-and-how-to-use-them-6e31270804d7#:~:text=Each%20of,technical%20tasks.\">19</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/h7w/the-top-5-ai-models-of-2025-whats-new-and-how-to-use-them-6e31270804d7#:~:text=Each%20of,technical%20tasks.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>19. The Top 5 AI Models of 2025: What’s New and How to Use... | Medium</b></a><span class=\"passage\">Each of these top 5 AI models expected in 2025 brings a unique set of strengths to the table. GPT-4.5 aims to be the all-encompassing intelligent system, unifying various AI capabilities for ease of use and broader applications. Grok 3 stands out for its advanced reasoning and real-time data access through X integration, making it a powerful tool for research and technical tasks.</span></span></sup></li>\n<li><strong>Adobe Firefly Image 4 & Video Model</strong> (April 24): Adobe unveiled Firefly Image Model 4 with unmatched definition and realism, plus the Firefly Video Model for dynamic, commercially safe video creation. 
<sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20January,is%20open-source.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20January,is%20open-source.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* January 20 – DeepSeek releases DeepSeek-R1, a large language model based on DeepSeek-V3 utilising a chain-of-thought, stating it achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. DeepSeek-R1 is open-source.</span></span></sup></li>\n<li><strong>Qwen3</strong> (April 28-29): Alibaba released Qwen3, featuring eight models from 0.6B to 235B parameters under Apache 2.0 license, with the flagship MoE model activating 22B of 235B parameters. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/List_of_large_language_models#:~:text=List%20%3B,2025%2C%20Anthropic\">11</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/List_of_large_language_models#:~:text=List%20%3B,2025%2C%20Anthropic\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>11. 
List of large language models</b></a><span class=\"passage\">List ; Llama 4, April 5, 2025, Meta AI ; OpenAI o3 and o4-mini, April 16, 2025, OpenAI ; Qwen3, April 2025, Alibaba Cloud ; Claude 4, May 22, 2025, Anthropic ...</span></span></sup><sup class=\"trigger\"><a href=\"https://www.dataversity.net/articles/a-brief-history-of-generative-ai/#:~:text=The%20years,time%2C%20generative\">22</a><span class=\"tooltip top center citation\"><a href=\"https://www.dataversity.net/articles/a-brief-history-of-generative-ai/#:~:text=The%20years,time%2C%20generative\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJxzSrJhO5ebC8Td3gLCbH6EioW2mTecVD0Za1gDbJL2ppO0PRyCkP4xEfoqFMJOe8\"> www.dataversity.net</span><b>22. A Brief History of Generative AI</b></a><span class=\"passage\">The years 2023 through 2025 represent one of the most rapid periods of advancement in the history of generative AI. During this time, generative ...</span></span></sup></li>\n</ul>\n<h2>May 2025</h2>\n<ul>\n<li><strong>AlphaEvolve</strong> (May 14): Google DeepMind announced AlphaEvolve, a Gemini-powered coding agent for designing advanced algorithms. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,is%20open-source.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,is%20open-source.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 
2025 in artificial intelligence</b></a><span class=\"passage\">The following is a list of events of the year 2025 in artificial intelligence, as well as predicted and scheduled events that have not yet occurred.January 20 – DeepSeek releases DeepSeek-R1, a large language model based on DeepSeek-V3 utilising a chain-of-thought, stating it achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.[1] DeepSeek-R1 is open-source.</span></span></sup></li>\n<li><strong>GPT-4.1 in ChatGPT</strong> (May 14): OpenAI made GPT-4.1 available directly in ChatGPT for Plus, Pro, and Team users, excelling at coding and instruction following. <sup class=\"trigger\"><a href=\"https://lifearchitect.ai/agi/#:~:text=Job%20listing%2C,vs%20AI%29.\">9</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/agi/#:~:text=Job%20listing%2C,vs%20AI%29.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>9. Alan’s conservative countdown to AGI – Dr Alan D. Thompson...</b></a><span class=\"passage\">Job listing, archive. Apr/2025. INFO: OpenAI Memory: ‘ChatGPT will reference your past conversations to recall useful information you’ve shared. It uses this to learn about your interests and preferences, helping make future chats more personalized and relevant.’ and ‘You can also teach ChatGPT something new by saying it in a chat‘, however, ‘it doesn’t retain every detail.’Timeline of AI and language models Timeline of AI and language models. AI + IQ testing (human vs AI) AI + IQ testing ...</span></span></sup></li>\n<li><strong>OpenAI Codex</strong> (May 16): OpenAI introduced Codex, a cloud-based coding agent that can navigate repositories, edit files, run commands, and execute tests from a prompt or spec. 
<sup class=\"trigger\"><a href=\"https://www.bloomberg.com/news/articles/2025-05-07/france-s-mistral-releases-ai-model-tailored-for-businesses#:~:text=Mistral%20is,race%20.\">28</a><span class=\"tooltip top center citation\"><a href=\"https://www.bloomberg.com/news/articles/2025-05-07/france-s-mistral-releases-ai-model-tailored-for-businesses#:~:text=Mistral%20is,race%20.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJIbCAMLeXmYi0pTgt21pWSOLmXfFcLQGsSdJ4AXbb3I8%3D\"> www.bloomberg.com</span><b>28. France’s Mistral Releases AI Model Tailored for... - Bloomberg</b></a><span class=\"passage\">Mistral is releasing a new artificial intelligence model that’s tailored for business use cases, as the French startup tries to capitalize on Europe’s concerns about being overly reliant on technology from Silicon Valley.Mistral, Europe’s leading AI developer, was last valued at €5.8 billion ($6.6 billion) in June 2024, but the company faces mounting competition. Meta Platforms Inc. recently announced a family of models, responding to a flurry of releases out of China from DeepSeek, Alibaba ...</span></span></sup></li>\n<li><strong>Veo 3</strong> (May 20): Google DeepMind released Veo 3, a state-of-the-art video generation model that also creates synchronized audio including dialogue, sound effects, and ambient noise. <sup class=\"trigger\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:\">2</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>2. 
The Evolution of Generative AI: 2018–2025 Timeline... | Medium</b></a><span class=\"passage\"></span></span></sup><sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence#:~:text=The%20training,War%20II.%5B132%5D.\">31</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence#:~:text=The%20training,War%20II.%5B132%5D.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>31. Timeline of artificial intelligence - Wikipedia</b></a><span class=\"passage\">The training computation of notable AI systems through time.2018. Alibaba language processing AI outscores top humans at a Stanford University reading and comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions.[131]. The European Lab for Learning and Intelligent Systems (aka Ellis) proposed as a pan-European competitor to American AI efforts, to stave off a brain drain of talent, along the lines of CERN after World War II.[132].</span></span></sup></li>\n<li><strong>Google Flow</strong> (May 20): Google launched Flow, an AI filmmaking tool built for creatives, powered by Veo, Imagen, and Gemini for creating cinematic clips and scenes. 
<sup class=\"trigger\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\">2</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>2. 2018–2025 Timeline Revealed! | by Generative AI | Medium</b></a><span class=\"passage\">Artificial Intelligence has seen groundbreaking advancements over the years, with Large Language Models (LLMs) evolving through post-training techniques that enhance their capabilities, alignment,...</span></span></sup><sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/Generative_artificial_intelligence#:~:text=By%20mid,difficulties%20with\">18</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/Generative_artificial_intelligence#:~:text=By%20mid,difficulties%20with\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>18. Generative artificial intelligence</b></a><span class=\"passage\">By mid 2025, despite continued consumer growth, many companies were increasingly abandoning generative AI pilot projects as they had difficulties with ...</span></span></sup></li>\n<li><strong>Imagen 4</strong> (May 20): Google released Imagen 4 at I/O 2025, featuring remarkable clarity in fine details, excellent text rendering, and both photorealistic and abstract style capabilities. 
<sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,to%20date.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,to%20date.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* February 27 – OpenAI announces a research preview of GPT-4.5, its largest and most advanced AI model to date.</span></span></sup></li>\n<li><strong>Gemma 3n Preview</strong> (May 20): Google announced Gemma 3n preview, a mobile-first architecture bringing powerful multimodal AI to edge devices with text, image, video, and audio capabilities. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,30%20minutes.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,30%20minutes.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 
2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* February 3 – OpenAI releases ChatGPT Deep Research, an artificial intelligence system integrated into ChatGPT, which generates cited reports on a user-specified topic by autonomously browsing the web for 5 to 30 minutes.</span></span></sup></li>\n<li><strong>Claude 4</strong> (May 22): Anthropic released Claude 4 with two models (Opus 4 and Sonnet 4), setting new standards for coding, advanced reasoning, and AI agents, with Opus 4 capable of sustained performance on complex, long-running tasks. <sup class=\"trigger\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:\">2</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>2. The Evolution of Generative AI: 2018–2025 Timeline... | Medium</b></a><span class=\"passage\"></span></span></sup><sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\">3</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>3. History of artificial intelligence - Wikipedia</b></a><span class=\"passage\">The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, amongst other use cases. Investment in AI boomed in the 2020s. 
The recent AI boom, initiated by the development of transformer architecture, led to the rapid scaling and public releases of large language models (LLMs) like ChatGPT.</span></span></sup></li>\n<li><strong>Mistral Medium 3</strong> (May 2025): Mistral released Mistral Medium 3, a frontier-class multimodal model for enterprise use. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/Generative_artificial_intelligence#:~:text=By%20mid,difficulties%20with\">18</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/Generative_artificial_intelligence#:~:text=By%20mid,difficulties%20with\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>18. Generative artificial intelligence</b></a><span class=\"passage\">By mid 2025, despite continued consumer growth, many companies were increasingly abandoning generative AI pilot projects as they had difficulties with ...</span></span></sup></li>\n</ul>\n<h2>June 2025</h2>\n<ul>\n<li><strong>Mistral Small 3.2</strong> (June 20): Mistral released Mistral Small 3.2, a minor update to Small 3.1 with continued improvements for local deployment. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,is%20open-source.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,is%20open-source.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 
2025 in artificial intelligence</b></a><span class=\"passage\">The following is a list of events of the year 2025 in artificial intelligence, as well as predicted and scheduled events that have not yet occurred.January 20 – DeepSeek releases DeepSeek-R1, a large language model based on DeepSeek-V3 utilising a chain-of-thought, stating it achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.[1] DeepSeek-R1 is open-source.</span></span></sup></li>\n<li><strong>Gemma 3n Full Release</strong> (June 26): Google officially released Gemma 3n, a next-gen multimodal AI model designed to run on smartphones and edge devices with 2-4B effective parameters while supporting text, image, video, and audio. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,per%20second.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,per%20second.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* February 6 – Mistral AI releases Le Chat, an AI assistant able to answer up to 1,000 words per second.</span></span></sup><sup class=\"trigger\"><a href=\"https://tribune.net.ph/2025/01/31/mistral-awed-by-deepseeks-ai-model#:~:text=French%20AI,its%20own.\">21</a><span class=\"tooltip top center citation\"><a href=\"https://tribune.net.ph/2025/01/31/mistral-awed-by-deepseeks-ai-model#:~:text=French%20AI,its%20own.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ8u3aqFXgf9E7x15c44BSPbDldkruTjhJYKD4QPFyN5g%3D\"> tribune.net.ph</span><b>21. 
Mistral awed by DeepSeek’s AI model - tribune.net.ph</b></a><span class=\"passage\">French AI startup Mistral on Thursday hailed Chinese competitor DeepSeek’s R1 model as “great” for the fast-developing sector, while announcing another new release of its own.</span></span></sup></li>\n<li><strong>Imagen 4 API</strong> (June 25): Google made Imagen 4 and Imagen 4 Ultra available in the Gemini API and Google AI Studio with significantly improved text rendering. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20January,is%20open-source.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20January,is%20open-source.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* January 20 – DeepSeek releases DeepSeek-R1, a large language model based on DeepSeek-V3 utilising a chain-of-thought, stating it achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. DeepSeek-R1 is open-source.</span></span></sup></li>\n</ul>\n<h2>July 2025</h2>\n<ul>\n<li><strong>Baidu ERNIE 4.5 Open Source</strong> (July 1): Baidu open-sourced the ERNIE 4.5 series with ten model variants ranging from 0.3B to 424B parameters. 
<sup class=\"trigger\"><a href=\"https://www.forbes.com/sites/patrickmoorhead/2024/08/29/meta-shares-hard-numbers-for-llamas-enterprise-ai-value/#:~:text=Closed%20LLMs,among%20others.\">13</a><span class=\"tooltip top center citation\"><a href=\"https://www.forbes.com/sites/patrickmoorhead/2024/08/29/meta-shares-hard-numbers-for-llamas-enterprise-ai-value/#:~:text=Closed%20LLMs,among%20others.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJb3-qX0Ki_cnfl0lYF7DUAW3ojYuPzodQvLR2LLiScoY%3D\"> www.forbes.com</span><b>13. Meta Shares Hard Numbers For Llama’s Enterprise AI Value</b></a><span class=\"passage\">Closed LLMs include OpenAI, Google Gemini and Anthropic. The most pervasive open LLM is the Llama model from Meta.Click through to the Meta release if you want to read glowing reviews from Jensen Huang of Nvidia and other partner executives from AWS, Databricks and Groq. The long list of Llama partners also includes Dell Technologies, IBM, Scale AI and Snowflake, among others.</span></span></sup></li>\n<li><strong>Grok 4</strong> (July 9): xAI released Grok 4, with Grok 4 Heavy available for SuperGrok Heavy subscribers for more challenging tasks. <sup class=\"trigger\"><a href=\"https://www.forbes.com/sites/patrickmoorhead/2024/08/29/meta-shares-hard-numbers-for-llamas-enterprise-ai-value/#:~:text=Closed%20LLMs,among%20others.\">13</a><span class=\"tooltip top center citation\"><a href=\"https://www.forbes.com/sites/patrickmoorhead/2024/08/29/meta-shares-hard-numbers-for-llamas-enterprise-ai-value/#:~:text=Closed%20LLMs,among%20others.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJb3-qX0Ki_cnfl0lYF7DUAW3ojYuPzodQvLR2LLiScoY%3D\"> www.forbes.com</span><b>13. 
Meta Shares Hard Numbers For Llama’s Enterprise AI Value</b></a><span class=\"passage\">Closed LLMs include OpenAI, Google Gemini and Anthropic. The most pervasive open LLM is the Llama model from Meta.Click through to the Meta release if you want to read glowing reviews from Jensen Huang of Nvidia and other partner executives from AWS, Databricks and Groq. The long list of Llama partners also includes Dell Technologies, IBM, Scale AI and Snowflake, among others.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.reuters.com/technology/artificial-intelligence/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3-2025-01-29/#:~:text=BEIJING%2C%20Jan,surpassed%20the\">24</a><span class=\"tooltip top center citation\"><a href=\"https://www.reuters.com/technology/artificial-intelligence/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3-2025-01-29/#:~:text=BEIJING%2C%20Jan,surpassed%20the\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ6_GhYEF_SGKvKrNpbEbdxhCyDbEhyav4hlAGYvShIUE%3D\"> www.reuters.com</span><b>24. Alibaba releases AI model it says surpasses DeepSeek</b></a><span class=\"passage\">BEIJING, Jan 29 (Reuters) - Chinese tech company Alibaba (9988.HK) on Wednesday released a new version of its Qwen 2.5 artificial intelligence model that it claimed surpassed the...</span></span></sup></li>\n<li><strong>Adobe Firefly Video Enhancements</strong> (July 17): Adobe introduced enhanced Firefly Video Model with improved motion fidelity and advanced video controls. 
<sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%2027,enhancing%20efficiency.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%2027,enhancing%20efficiency.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* 27 March – Engineers in the fields of electrical and computer science create a groundbreaking AI system known as \"Super-Turing AI,\" designed to function more similarly to the human brain. Unlike conventional models that isolate tasks and shuttle massive volumes of data between components, this innovative AI merges key processes, streamlining operations and enhancing efficiency.</span></span></sup></li>\n<li><strong>ChatGPT Agent Mode</strong> (July 17): OpenAI integrated Operator into ChatGPT as \"agent mode,\" combining deep research with visual browser capabilities. <sup class=\"trigger\"><a href=\"https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations#:~:text=See%20how,revolutionized%20AI.\">15</a><span class=\"tooltip top center citation\"><a href=\"https://www.qualcomm.com/news/onq/2024/02/the-rise-of-generative-ai-timeline-of-breakthrough-innovations#:~:text=See%20how,revolutionized%20AI.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJVsUwoxweF2cHESR7OzfZ3x2uhGK06FCwMyTe0wXYCqE%3D\"> www.qualcomm.com</span><b>15. AI Timeline | Innovations and Advancements</b></a><span class=\"passage\">See how generative AI has evolved, from early attempts to cutting-edge transformers. 
Uncover the advancements that revolutionized AI.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.reuters.com/technology/artificial-intelligence/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3-2025-01-29/#:~:text=BEIJING%2C%20Jan,surpassed%20the\">24</a><span class=\"tooltip top center citation\"><a href=\"https://www.reuters.com/technology/artificial-intelligence/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3-2025-01-29/#:~:text=BEIJING%2C%20Jan,surpassed%20the\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ6_GhYEF_SGKvKrNpbEbdxhCyDbEhyav4hlAGYvShIUE%3D\"> www.reuters.com</span><b>24. Alibaba releases AI model it says surpasses DeepSeek</b></a><span class=\"passage\">BEIJING, Jan 29 (Reuters) - Chinese tech company Alibaba (9988.HK) on Wednesday released a new version of its Qwen 2.5 artificial intelligence model that it claimed surpassed the...</span></span></sup></li>\n<li><strong>GLM-4.5</strong> (July 29): Zhipu AI released GLM-4.5 under an MIT license in two sizes (355B-parameter GLM-4.5 and 106B-parameter GLM-4.5-Air), trained on a 22T-token corpus. <sup class=\"trigger\"><a href=\"https://medium.com/@Micheal-Lanham/the-october-2025-llm-revolution-what-you-need-to-know-7828c222c9d2#:~:text=Anthropic%20even,original%20sources.\">32</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@Micheal-Lanham/the-october-2025-llm-revolution-what-you-need-to-know-7828c222c9d2#:~:text=Anthropic%20even,original%20sources.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>32. 
The October 2025 LLM Revolution: What You Need to Know | Medium</b></a><span class=\"passage\">Anthropic even released a Claude Agent SDK specifically for building complex AI agents with this model.Sources: Information based on official announcements from Anthropic, OpenAI, IBM Research, Meta, Baidu, Liquid AI, and Oracle as of October 2025. All model specifications and benchmarks cited from original sources.</span></span></sup></li>\n</ul>\n<h2>August 2025</h2>\n<ul>\n<li><strong>GPT-OSS</strong> (August 5): OpenAI released GPT-OSS, its first open-weight language models since GPT-2, in 20B and 120B sizes under an Apache 2.0 license. <sup class=\"trigger\"><a href=\"https://medium.com/@hadiyolworld007/llm-frameworks-you-cant-ignore-in-2025-apple-meta-google-more-939ae36010dc#:~:text=The%20AI,Anthropic%20anymore.\">23</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@hadiyolworld007/llm-frameworks-you-cant-ignore-in-2025-apple-meta-google-more-939ae36010dc#:~:text=The%20AI,Anthropic%20anymore.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>23. LLM Frameworks You Can’t Ignore in 2025 (Apple, Meta, Google...)</b></a><span class=\"passage\">The AI space isn’t just moving fast in 2025 — it’s warping forward like a bullet train with no brakes. LLMs (Large Language Models) are evolving rapidly, but what’s truly game-changing this year isn’t just the models — it’s the frameworks that power them. From on-device optimization to real-time agent orchestration, these frameworks are redefining how we build, scale, and deploy intelligence. And the surprising part? It’s not just OpenAI or Anthropic anymore.</span></span></sup></li>\n<li><strong>Claude Opus 4.1</strong> (August 5): Anthropic released Claude Opus 4.1, an upgrade to Opus 4 with improved coding and agentic performance. 
<sup class=\"trigger\"><a href=\"https://llm-stats.com/llm-updates#:~:text=Latest%20version,model%20versions.\">17</a><span class=\"tooltip top center citation\"><a href=\"https://llm-stats.com/llm-updates#:~:text=Latest%20version,model%20versions.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJh7Pyp4mLUPYsYy1tF3cwU6RlAy6AgEH5vY69V478lNE%3D\"> llm-stats.com</span><b>17. LLM Updates 2025 | AI Updates & Model Version Changelog</b></a><span class=\"passage\">Latest version releases, API changes, and model improvements from OpenAI, Anthropic, Google, and Meta. Complete changelog for GPT, Claude, Gemini, and Llama model versions.</span></span></sup></li>\n<li><strong>GPT-5</strong> (August 7): OpenAI released GPT-5 in three variants (GPT-5, GPT-5 mini, GPT-5 nano) with integrated thinking abilities, available in ChatGPT and API. <sup class=\"trigger\"><a href=\"https://llm-stats.com/#:~:text=Comprehensive%20AI,Compare%20performance\">10</a><span class=\"tooltip top center citation\"><a href=\"https://llm-stats.com/#:~:text=Comprehensive%20AI,Compare%20performance\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJh7Pyp4mLUPYsYy1tF3cwU6RlAy6AgEH5vY69V478lNE%3D\"> llm-stats.com</span><b>10. AI Leaderboards 2025 - Compare LLM, TTS, STT, Video ...</b></a><span class=\"passage\">Comprehensive AI leaderboards comparing LLM, text-to-speech, speech-to-text, video generation, image generation, and embedding models. 
Compare performance ...</span></span></sup><sup class=\"trigger\"><a href=\"https://www.aicerts.ai/news/evolving-llm-market-anthropic-leads-2025-enterprise-share/#:~:text=See%20how,savvy%20tech\">29</a><span class=\"tooltip top center citation\"><a href=\"https://www.aicerts.ai/news/evolving-llm-market-anthropic-leads-2025-enterprise-share/#:~:text=See%20how,savvy%20tech\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJSP0KYg0nLIr378hjubQLCNUBhnKG3IQYXuDa5Twhe24%3D\"> www.aicerts.ai</span><b>29. Evolving LLM Market: Anthropic Leads 2025 Enterprise ...</b></a><span class=\"passage\">See how the LLM Market shifts as Anthropic tops enterprise share, budgets rise, and experts deliver actionable guidance for savvy tech ...</span></span></sup></li>\n<li><strong>Gemma 3 270M</strong> (August 14): Google released Gemma 3 in a compact 270M size for hyper-efficient deployment. <sup class=\"trigger\"><a href=\"https://www.linkedin.com/pulse/generative-ai-2025-timeline-mar%C3%ADa-jos%C3%A9-pel%C3%A1ez-montalvo-0isbe#:~:text=Over%20the,and%20lets\">30</a><span class=\"tooltip top center citation\"><a href=\"https://www.linkedin.com/pulse/generative-ai-2025-timeline-mar%C3%ADa-jos%C3%A9-pel%C3%A1ez-montalvo-0isbe#:~:text=Over%20the,and%20lets\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJfOw4fqS2dA275zgQuECqyyCHsA03oLGZQFNxMcbim_A%3D\"> www.linkedin.com</span><b>30. Generative AI in 2025: A Timeline - LinkedIn</b></a><span class=\"passage\">Over the course of 2025, the conversation around generative AI did not remain stable — its focus, actors, and tone shifted over time. 
This visualization brings together 745 news items and lets ...</span></span></sup></li>\n<li><strong>Imagen 4 Fast GA</strong> (August 15): Google made Imagen 4 Fast and the full Imagen 4 family generally available in the Gemini API. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,30%20minutes.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%20February,30%20minutes.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* February 3 – OpenAI releases ChatGPT Deep Research, an artificial intelligence system integrated into ChatGPT, which generates cited reports on a user-specified topic by autonomously browsing the web for 5 to 30 minutes.</span></span></sup></li>\n<li><strong>DeepSeek-V3.1</strong> (August 21): DeepSeek released V3.1, a 671B parameter hybrid model that can switch between thinking and non-thinking modes, trained on 15.6T tokens. <sup class=\"trigger\"><a href=\"https://www.aicerts.ai/news/evolving-llm-market-anthropic-leads-2025-enterprise-share/#:~:text=See%20how,savvy%20tech\">29</a><span class=\"tooltip top center citation\"><a href=\"https://www.aicerts.ai/news/evolving-llm-market-anthropic-leads-2025-enterprise-share/#:~:text=See%20how,savvy%20tech\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJSP0KYg0nLIr378hjubQLCNUBhnKG3IQYXuDa5Twhe24%3D\"> www.aicerts.ai</span><b>29. 
Evolving LLM Market: Anthropic Leads 2025 Enterprise ...</b></a><span class=\"passage\">See how the LLM Market shifts as Anthropic tops enterprise share, budgets rise, and experts deliver actionable guidance for savvy tech ...</span></span></sup><sup class=\"trigger\"><a href=\"https://aitechcentre.com/openai-vs-anthropic-claude-google-gemini-and-mistral/#:~:text=Competitors%20like,AI%20narrative.\">34</a><span class=\"tooltip top center citation\"><a href=\"https://aitechcentre.com/openai-vs-anthropic-claude-google-gemini-and-mistral/#:~:text=Competitors%20like,AI%20narrative.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJVhaf5EC3KLauscUPucwb2HaUwqTjueB3gtr-LJ-l5IA%3D\"> aitechcentre.com</span><b>34. OpenAI vs. Anthropic Claude, Google Gemini, and Mistral</b></a><span class=\"passage\">Competitors like Meta (with Llama 3) and others also play into industry dynamics, but in 2025 the quartet of OpenAI, Anthropic, Google, and open-model startups like Mistral define much of the AI narrative.</span></span></sup></li>\n<li><strong>Mistral Medium 3.1</strong> (August 2025): Mistral released Mistral Medium 3.1, a frontier-class multimodal model. <sup class=\"trigger\"><a href=\"https://www.forbes.com/sites/patrickmoorhead/2024/08/29/meta-shares-hard-numbers-for-llamas-enterprise-ai-value/#:~:text=Closed%20LLMs,among%20others.\">13</a><span class=\"tooltip top center citation\"><a href=\"https://www.forbes.com/sites/patrickmoorhead/2024/08/29/meta-shares-hard-numbers-for-llamas-enterprise-ai-value/#:~:text=Closed%20LLMs,among%20others.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJb3-qX0Ki_cnfl0lYF7DUAW3ojYuPzodQvLR2LLiScoY%3D\"> www.forbes.com</span><b>13. 
Meta Shares Hard Numbers For Llama’s Enterprise AI Value</b></a><span class=\"passage\">Closed LLMs include OpenAI, Google Gemini and Anthropic. The most pervasive open LLM is the Llama model from Meta.Click through to the Meta release if you want to read glowing reviews from Jensen Huang of Nvidia and other partner executives from AWS, Databricks and Groq. The long list of Llama partners also includes Dell Technologies, IBM, Scale AI and Snowflake, among others.</span></span></sup></li>\n<li><strong>Nano Banana (Gemini 2.5 Flash Image)</strong> (August 26): Google launched Nano Banana, a state-of-the-art image generation and editing model integrated into the Gemini app. <sup class=\"trigger\"><a href=\"https://www.shakudo.io/blog/top-9-large-language-models#:~:text=As%20of,DeepSeek%20V3.&text=Mistral%20AI%2C,prominent%20player\">16</a><span class=\"tooltip top center citation\"><a href=\"https://www.shakudo.io/blog/top-9-large-language-models#:~:text=As%20of,DeepSeek%20V3.&text=Mistral%20AI%2C,prominent%20player\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJmlTltlSNyuFEPES75cI2m9n850JedvMmgOhqRISd0U0%3D\"> www.shakudo.io</span><b>16. Top 9 Large Language Models as of December 2025</b></a><span class=\"passage\">As of late 2024 and mid-2025, DeepSeek has been actively releasing and updating its models, including the DeepSeek V3. ... 
Mistral AI, a prominent player ...</span></span></sup><sup class=\"trigger\"><a href=\"https://www.straitstimes.com/asia/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3#:~:text=BEIJING%20-,acclaimed%20DeepSeek-V3.\">33</a><span class=\"tooltip top center citation\"><a href=\"https://www.straitstimes.com/asia/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3#:~:text=BEIJING%20-,acclaimed%20DeepSeek-V3.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJK1zeREBBeDCGx9HcPwF2LRKU2ch4rc9oPX9WYArsb1PBPlvmQq3eALnzSl_2EBHR\"> www.straitstimes.com</span><b>33. Alibaba releases AI model it claims surpasses DeepSeek-V3</b></a><span class=\"passage\">BEIJING - Chinese tech company Alibaba on Jan 29 released a new version of its Qwen 2.5 artificial intelligence (AI) model that it claimed surpassed the highly acclaimed DeepSeek-V3.</span></span></sup></li>\n<li><strong>YandexGPT 5.1 Pro</strong> (August 28): Yandex released YandexGPT 5.1 Pro for the Alice chatbot. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/List_of_large_language_models#:~:text=List%20%3B,2025%2C%20Anthropic\">11</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/List_of_large_language_models#:~:text=List%20%3B,2025%2C%20Anthropic\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>11. 
List of large language models</b></a><span class=\"passage\">List ; Llama 4, April 5, 2025, Meta AI ; OpenAI o3 and o4-mini, April 16, 2025, OpenAI ; Qwen3, April 2025, Alibaba Cloud ; Claude 4, May 22, 2025, Anthropic ...</span></span></sup></li>\n</ul>\n<h2>September 2025</h2>\n<ul>\n<li><strong>Apertus</strong> (September 2): ETH Zurich and EPF Lausanne released Apertus, a 70B parameter model claimed to be the first LLM compliant with the EU's AI Act, under Apache 2.0 license. <sup class=\"trigger\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\">2</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>2. 2018–2025 Timeline Revealed! | by Generative AI | Medium</b></a><span class=\"passage\">Artificial Intelligence has seen groundbreaking advancements over the years, with Large Language Models (LLMs) evolving through post-training techniques that enhance their capabilities, alignment,...</span></span></sup></li>\n<li><strong>OpenAI Codex Upgrades</strong> (September 15): OpenAI released major upgrades to Codex, making it faster, more reliable, and better at real-time collaboration. 
<sup class=\"trigger\"><a href=\"https://www.bloomberg.com/news/articles/2025-04-28/alibaba-rolls-out-latest-flagship-ai-model-in-post-deepseek-race#:~:text=Alibaba%20%E2%80%94,recent%20months.\">20</a><span class=\"tooltip top center citation\"><a href=\"https://www.bloomberg.com/news/articles/2025-04-28/alibaba-rolls-out-latest-flagship-ai-model-in-post-deepseek-race#:~:text=Alibaba%20%E2%80%94,recent%20months.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJIbCAMLeXmYi0pTgt21pWSOLmXfFcLQGsSdJ4AXbb3I8%3D\"> www.bloomberg.com</span><b>20. China's Alibaba Rolls Out Latest Flagship Qwen AI Model... - Bloomberg</b></a><span class=\"passage\">Alibaba — which in 2025 declared itself all-in on the AI race — came out with a new model in its Qwen 2.5 series just a few weeks ago that can process text, pictures, audio and video — and is efficient enough to run directly on phones and laptops. It unveiled a new version of its AI assistant Quark app in March. Read More: China Floods the World With AI Models After DeepSeek Success. OpenAI , Google and Anthropic have similarly released a flurry of models in recent months.</span></span></sup></li>\n<li><strong>Qwen3-Omni</strong> (September 22): Alibaba released Qwen3-Omni, a multimodal model processing text, images, audio, and video with real-time streaming responses in text and natural speech. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/List_of_large_language_models#:~:text=List%20%3B,2025%2C%20Anthropic\">11</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/List_of_large_language_models#:~:text=List%20%3B,2025%2C%20Anthropic\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>11. 
List of large language models</b></a><span class=\"passage\">List ; Llama 4, April 5, 2025, Meta AI ; OpenAI o3 and o4-mini, April 16, 2025, OpenAI ; Qwen3, April 2025, Alibaba Cloud ; Claude 4, May 22, 2025, Anthropic ...</span></span></sup></li>\n<li><strong>Claude Sonnet 4.5</strong> (September 29): Anthropic released Claude Sonnet 4.5, positioning it as its strongest model yet for coding and long-running agentic tasks. <sup class=\"trigger\"><a href=\"https://timeline.the-blueprint.ai/#:~:text=28%20October,%3B%2015\">6</a><span class=\"tooltip top center citation\"><a href=\"https://timeline.the-blueprint.ai/#:~:text=28%20October,%3B%2015\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJyulldmP2ETUYh6LPtXMsl6zq0TPxWtuvKb1xoaDmm9IYdi8Rkg1hdfsQOywf8-KL\"> timeline.the-blueprint.ai</span><b>6. Generative AI Timeline | The Blueprint</b></a><span class=\"passage\">28 October 2025. Adobe release Firefly Image 5 ; 21 October 2025. OpenAI introduce ChatGPT Atlas browser ; 16 October 2025. Anthropic introduce Agent Skills ; 15 ...</span></span></sup></li>\n<li><strong>Claude Code 2.0 & Claude Agent SDK</strong> (September 29): Anthropic released Claude Code 2.0 with checkpoints, native VS Code extension, and the renamed Claude Agent SDK for building complex AI agents. <sup class=\"trigger\"><a href=\"https://lifearchitect.ai/timeline/#:~:text=Time%20between,My%20Cheese%3F\">7</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/timeline/#:~:text=Time%20between,My%20Cheese%3F\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>7. Timeline of AI and language models – Dr Alan D. Thompson</b></a><span class=\"passage\">Time between releases of OpenAI’s GPT models. Model. 
Months since last release.Living documents Alan’s conservative countdown to AGI Alan’s ASI checklist Alan’s technical highlights (new tab) ALPrompt Calculations and conversions Compute Table (new tab) Datasets Table LLM-Reset/LLM-Normalize Models Table Timeline of AI and language models Visualizations The Who Moved My Cheese?</span></span></sup><sup class=\"trigger\"><a href=\"https://www.aibase.com/ai-timeline/2025#:~:text=Explore%20the,AI%20chronicle.\">26</a><span class=\"tooltip top center citation\"><a href=\"https://www.aibase.com/ai-timeline/2025#:~:text=Explore%20the,AI%20chronicle.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ0-NNw-9x3y6pt8l_-EsIclAaMgCVT2PvNZTajToQl0g%3D\"> www.aibase.com</span><b>26. 2025 Artificial Intelligence (AI) Events Timeline</b></a><span class=\"passage\">Explore the comprehensive timeline of 2025's artificial intelligence milestones, including major breakthroughs, product launches, and industry developments. Stay updated with the most authoritative AI chronicle.</span></span></sup></li>\n<li><strong>DeepSeek-V3.2-Exp</strong> (September 29): DeepSeek released an experimental V3.2 model with DeepSeek Sparse Attention (DSA) mechanism. <sup class=\"trigger\"><a href=\"https://www.timesofai.com/news/roundup-of-ai-model-releases-in-2025/#:~:text=The%20AI,advancing%20together.\">35</a><span class=\"tooltip top center citation\"><a href=\"https://www.timesofai.com/news/roundup-of-ai-model-releases-in-2025/#:~:text=The%20AI,advancing%20together.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJZtLQBdzE70G7j3U2UXQ0KkRgyOThmj1zJIW9KPMtSA8%3D\"> www.timesofai.com</span><b>35. 
AI Model Releases in 2025: The Roundup of AI Launches</b></a><span class=\"passage\">The AI model roundup of 2025 clearly indicates the transition from singular breakthroughs to an intelligent, integrated, and ready-for-production era. The multimodal reasoning, specialized coding models, open ecosystems, and 3D understanding are all advancing together.</span></span></sup></li>\n<li><strong>GLM-4.6</strong> (September 30): Zhipu AI released GLM-4.6, a 357B parameter model under Apache 2.0 license. <sup class=\"trigger\"><a href=\"https://www.dataversity.net/articles/a-brief-history-of-generative-ai/#:~:text=The%20years,time%2C%20generative\">22</a><span class=\"tooltip top center citation\"><a href=\"https://www.dataversity.net/articles/a-brief-history-of-generative-ai/#:~:text=The%20years,time%2C%20generative\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJxzSrJhO5ebC8Td3gLCbH6EioW2mTecVD0Za1gDbJL2ppO0PRyCkP4xEfoqFMJOe8\"> www.dataversity.net</span><b>22. A Brief History of Generative AI</b></a><span class=\"passage\">The years 2023 through 2025 represent one of the most rapid periods of advancement in the history of generative AI. During this time, generative ...</span></span></sup></li>\n<li><strong>Sora 2</strong> (September 30): OpenAI released Sora 2, its flagship video and audio generation model with advanced world simulation capabilities, improved physics, instruction-following, and synchronized audio. 
<sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\">3</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>3. History of artificial intelligence - Wikipedia</b></a><span class=\"passage\">The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, amongst other use cases. Investment in AI boomed in the 2020s. The recent AI boom, initiated by the development of transformer architecture, led to the rapid scaling and public releases of large language models (LLMs) like ChatGPT.</span></span></sup><sup class=\"trigger\"><a href=\"https://llm-stats.com/#:~:text=Comprehensive%20AI,Compare%20performance\">10</a><span class=\"tooltip top center citation\"><a href=\"https://llm-stats.com/#:~:text=Comprehensive%20AI,Compare%20performance\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJh7Pyp4mLUPYsYy1tF3cwU6RlAy6AgEH5vY69V478lNE%3D\"> llm-stats.com</span><b>10. AI Leaderboards 2025 - Compare LLM, TTS, STT, Video ...</b></a><span class=\"passage\">Comprehensive AI leaderboards comparing LLM, text-to-speech, speech-to-text, video generation, image generation, and embedding models. Compare performance ...</span></span></sup></li>\n</ul>\n<h2>October 2025</h2>\n<ul>\n<li><strong>Veo 3.1</strong> (October 15): Google released Veo 3.1 with Scene Extension, First-Last Frame transitions, and Reference to Video features for longer, more coherent cinematic sequences. 
<sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,yet%20occurred.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,yet%20occurred.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">The following is a list of events of the year 2025 in artificial intelligence, as well as predicted and scheduled events that have not yet occurred.</span></span></sup><sup class=\"trigger\"><a href=\"https://timeline.the-blueprint.ai/#:~:text=28%20October,%3B%2015\">6</a><span class=\"tooltip top center citation\"><a href=\"https://timeline.the-blueprint.ai/#:~:text=28%20October,%3B%2015\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJyulldmP2ETUYh6LPtXMsl6zq0TPxWtuvKb1xoaDmm9IYdi8Rkg1hdfsQOywf8-KL\"> timeline.the-blueprint.ai</span><b>6. Generative AI Timeline | The Blueprint</b></a><span class=\"passage\">28 October 2025. Adobe release Firefly Image 5 ; 21 October 2025. OpenAI introduce ChatGPT Atlas browser ; 16 October 2025. Anthropic introduce Agent Skills ; 15 ...</span></span></sup></li>\n<li><strong>Anthropic Agent Skills</strong> (October 16): Anthropic introduced Agent Skills for extending Claude's capabilities with task-specific workflows. 
<sup class=\"trigger\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\">2</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>2. 2018–2025 Timeline Revealed! | by Generative AI | Medium</b></a><span class=\"passage\">Artificial Intelligence has seen groundbreaking advancements over the years, with Large Language Models (LLMs) evolving through post-training techniques that enhance their capabilities, alignment,...</span></span></sup></li>\n<li><strong>ChatGPT Atlas Browser</strong> (October 21): OpenAI introduced the Atlas browser feature in ChatGPT. <sup class=\"trigger\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\">2</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>2. 2018–2025 Timeline Revealed! 
| by Generative AI | Medium</b></a><span class=\"passage\">Artificial Intelligence has seen groundbreaking advancements over the years, with Large Language Models (LLMs) evolving through post-training techniques that enhance their capabilities, alignment,...</span></span></sup></li>\n<li><strong>Adobe Firefly Image 5</strong> (October 28): Adobe released Firefly Image Model 5 for photorealism and prompt-based editing, along with new partner models from ElevenLabs, Google, and Topaz Labs. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%2016,and%20o4-mini.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=%2A%2016,and%20o4-mini.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">* 16 April – OpenAI announces the launch of two new AI models, o3 and o4-mini.</span></span></sup></li>\n<li><strong>Alice AI LLM 1.0</strong> (October 28): Yandex released Alice AI LLM 1.0 for the Alice AI chatbot. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,yet%20occurred.\">1</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/2025_in_artificial_intelligence#:~:text=The%20following,yet%20occurred.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>1. 
2025 in artificial intelligence - Wikipedia</b></a><span class=\"passage\">The following is a list of events of the year 2025 in artificial intelligence, as well as predicted and scheduled events that have not yet occurred.</span></span></sup></li>\n</ul>\n<h2>November 2025</h2>\n<ul>\n<li><strong>ERNIE-4.5-VL-28B-A3B-Thinking</strong> (November 11): Baidu released a multimodal reasoning model achieving SOTA performance while activating only 3B parameters. <sup class=\"trigger\"><a href=\"https://www.forbes.com/sites/johnwerner/2025/08/06/new-models-from-openai-anthropic-google--all-at-the-same-time/#:~:text=It%E2%80%99s%20Christmas,OSS%20Models.\">12</a><span class=\"tooltip top center citation\"><a href=\"https://www.forbes.com/sites/johnwerner/2025/08/06/new-models-from-openai-anthropic-google--all-at-the-same-time/#:~:text=It%E2%80%99s%20Christmas,OSS%20Models.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJb3-qX0Ki_cnfl0lYF7DUAW3ojYuPzodQvLR2LLiScoY%3D\"> www.forbes.com</span><b>12. New Models From OpenAI, Anthropic, Google – All At The Same Time - Forbes</b></a><span class=\"passage\">It’s Christmas in August – at least, for those tech-wonks who are interested in new model releases. Today’s news is a very full stocking of brand new LLM editions from three of the biggies – OpenAI, Anthropic, and Google. I’ll go over these one by one, discussing what these most recent model iterations bring to the table. OpenAI OSS Models.</span></span></sup></li>\n<li><strong>ERNIE 5.0</strong> (November 13): Baidu unveiled ERNIE 5.0, a natively omni-modal foundation model jointly modeling text, images, audio, and video at Baidu World 2025. 
<sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\">3</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>3. History of artificial intelligence - Wikipedia</b></a><span class=\"passage\">The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, amongst other use cases. Investment in AI boomed in the 2020s. The recent AI boom, initiated by the development of transformer architecture, led to the rapid scaling and public releases of large language models (LLMs) like ChatGPT.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.aicerts.ai/news/evolving-llm-market-anthropic-leads-2025-enterprise-share/#:~:text=See%20how,savvy%20tech\">29</a><span class=\"tooltip top center citation\"><a href=\"https://www.aicerts.ai/news/evolving-llm-market-anthropic-leads-2025-enterprise-share/#:~:text=See%20how,savvy%20tech\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJSP0KYg0nLIr378hjubQLCNUBhnKG3IQYXuDa5Twhe24%3D\"> www.aicerts.ai</span><b>29. Evolving LLM Market: Anthropic Leads 2025 Enterprise ...</b></a><span class=\"passage\">See how the LLM Market shifts as Anthropic tops enterprise share, budgets rise, and experts deliver actionable guidance for savvy tech ...</span></span></sup></li>\n<li><strong>Nano Banana Pro</strong> (November 20): Google released Nano Banana Pro, built on Gemini 3 Pro, offering unprecedented image generation accuracy with legible text in multiple languages. 
<sup class=\"trigger\"><a href=\"https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html#:~:text=Artificial%20intelligence,and%20DeepSeek.\">8</a><span class=\"tooltip top center citation\"><a href=\"https://www.cnbc.com/2025/12/02/mistral-unveils-new-ai-models-in-bid-to-compete-with-openai-google.html#:~:text=Artificial%20intelligence,and%20DeepSeek.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ2QLXvGCU8S3z9xamAa4vPpIb_FiSmI5IPPHyynQHBb0%3D\"> www.cnbc.com</span><b>8. French AI lab Mistral releases new AI models as it looks to keep pace ...</b></a><span class=\"passage\">Artificial intelligence startup Mistral released a new suite of models Tuesday as it looks to keep pace with leading AI labs Google, OpenAI and DeepSeek.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.reuters.com/technology/artificial-intelligence/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3-2025-01-29/#:~:text=BEIJING%2C%20Jan,surpassed%20the\">24</a><span class=\"tooltip top center citation\"><a href=\"https://www.reuters.com/technology/artificial-intelligence/alibaba-releases-ai-model-it-claims-surpasses-deepseek-v3-2025-01-29/#:~:text=BEIJING%2C%20Jan,surpassed%20the\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ6_GhYEF_SGKvKrNpbEbdxhCyDbEhyav4hlAGYvShIUE%3D\"> www.reuters.com</span><b>24. 
Alibaba releases AI model it says surpasses DeepSeek</b></a><span class=\"passage\">BEIJING, Jan 29 (Reuters) - Chinese tech company Alibaba (9988.HK) on Wednesday released a new version of its Qwen 2.5 artificial intelligence model that it claimed surpassed the...</span></span></sup></li>\n<li><strong>Claude Opus 4.5</strong> (November 2025): Anthropic released Claude Opus 4.5, described as the best model in the world for coding, agents, and computer use. <sup class=\"trigger\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\">4</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@jyotishmandas85p/the-latest-generative-ai-models-in-2025-a-comprehensive-guide-58f7dcb9f8f3#:~:text=What%20started,improve%20correctness.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>4. The Latest Generative AI Models in 2025: A Comprehensive Guide</b></a><span class=\"passage\">What started as text-based language models has now exploded into sophisticated multimodal systems that understand and generate text, images, audio, and video, often in real-time. 
In this blog post, I bring you the most up-to-date and comprehensive roundup of generative AI models released in 2025, across companies like OpenAI, Google DeepMind, Anthropic, Alibaba, and more.As of May 2025, the evolution of generative AI models shows some clear trends: Reasoning Before Responding: Claude 3.7 and ...</span></span></sup><sup class=\"trigger\"><a href=\"https://www.dataversity.net/articles/a-brief-history-of-generative-ai/#:~:text=The%20years,time%2C%20generative\">22</a><span class=\"tooltip top center citation\"><a href=\"https://www.dataversity.net/articles/a-brief-history-of-generative-ai/#:~:text=The%20years,time%2C%20generative\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJxzSrJhO5ebC8Td3gLCbH6EioW2mTecVD0Za1gDbJL2ppO0PRyCkP4xEfoqFMJOe8\"> www.dataversity.net</span><b>22. A Brief History of Generative AI</b></a><span class=\"passage\">The years 2023 through 2025 represent one of the most rapid periods of advancement in the history of generative AI. During this time, generative ...</span></span></sup></li>\n</ul>\n<h2>December 2025</h2>\n<ul>\n<li><strong>DeepSeek-V3.2</strong> (December 1): DeepSeek released V3.2 with further enhanced code generation capabilities, achieving 89% on HumanEval and 41% on LiveCodeBench. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\">3</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>3. 
History of artificial intelligence - Wikipedia</b></a><span class=\"passage\">The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, amongst other use cases. Investment in AI boomed in the 2020s. The recent AI boom, initiated by the development of transformer architecture, led to the rapid scaling and public releases of large language models (LLMs) like ChatGPT.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/#:~:text=Why%20would,its%20shareholders.\">25</a><span class=\"tooltip top center citation\"><a href=\"https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/#:~:text=Why%20would,its%20shareholders.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJb3-qX0Ki_cnfl0lYF7DUAW3ojYuPzodQvLR2LLiScoY%3D\"> www.forbes.com</span><b>25. 2025 will be a huge year for the field of artificial intelligence.</b></a><span class=\"passage\">Why would Meta make this strategic pivot? Keeping up with the LLM frontier is incredibly expensive. Meta will need to invest many billions of dollars every year if it wants Llama to stay at or near parity with the latest frontier models from OpenAI, Anthropic and others. Meta is one of the world’s largest and most deep-pocketed companies. But it is also a publicly traded company that is ultimately answerable to its shareholders.</span></span></sup></li>\n<li><strong>Runway Gen-4.5</strong> (December 1): Runway released Gen-4.5, pushing the frontier of video generation with unprecedented visual fidelity and creative control. 
<sup class=\"trigger\"><a href=\"https://www.shakudo.io/blog/top-9-large-language-models#:~:text=As%20of,DeepSeek%20V3.&text=Mistral%20AI%2C,prominent%20player\">16</a><span class=\"tooltip top center citation\"><a href=\"https://www.shakudo.io/blog/top-9-large-language-models#:~:text=As%20of,DeepSeek%20V3.&text=Mistral%20AI%2C,prominent%20player\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJmlTltlSNyuFEPES75cI2m9n850JedvMmgOhqRISd0U0%3D\"> www.shakudo.io</span><b>16. Top 9 Large Language Models as of December 2025</b></a><span class=\"passage\">As of late 2024 and mid-2025, DeepSeek has been actively releasing and updating its models, including the DeepSeek V3. ... Mistral AI, a prominent player ...</span></span></sup></li>\n<li><strong>Mistral 3 (Large 3, Ministral 3)</strong> (December 2): Mistral released the Mistral 3 family including Large 3 (675B total, 41B active parameters MoE) and Ministral 3 models with 256K context and multimodal capabilities. <sup class=\"trigger\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\">3</a><span class=\"tooltip top center citation\"><a href=\"https://en.wikipedia.org/wiki/History_of_artificial_intelligence#:~:text=The%20transformer,like%20ChatGPT.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJuWyqBKj8MT4ZBZYnSH4uzuNLNQdbiyz2-gUDYI6hGs0%3D\"> en.wikipedia.org</span><b>3. History of artificial intelligence - Wikipedia</b></a><span class=\"passage\">The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, amongst other use cases. Investment in AI boomed in the 2020s. 
The recent AI boom, initiated by the development of transformer architecture, led to the rapid scaling and public releases of large language models (LLMs) like ChatGPT.</span></span></sup><sup class=\"trigger\"><a href=\"https://llm-stats.com/llm-updates#:~:text=Latest%20version,model%20versions.\">17</a><span class=\"tooltip top center citation\"><a href=\"https://llm-stats.com/llm-updates#:~:text=Latest%20version,model%20versions.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJh7Pyp4mLUPYsYy1tF3cwU6RlAy6AgEH5vY69V478lNE%3D\"> llm-stats.com</span><b>17. LLM Updates 2025 | AI Updates & Model Version Changelog</b></a><span class=\"passage\">Latest version releases, API changes, and model improvements from OpenAI, Anthropic, Google, and Meta. Complete changelog for GPT, Claude, Gemini, and Llama model versions.</span></span></sup></li>\n<li><strong>Anthropic MCP Donation</strong> (December 9): Anthropic donated the Model Context Protocol to the newly established Agentic AI Foundation for open governance. <sup class=\"trigger\"><a href=\"https://www.forbes.com/sites/johnwerner/2025/08/06/new-models-from-openai-anthropic-google--all-at-the-same-time/#:~:text=It%E2%80%99s%20Christmas,OSS%20Models.\">12</a><span class=\"tooltip top center citation\"><a href=\"https://www.forbes.com/sites/johnwerner/2025/08/06/new-models-from-openai-anthropic-google--all-at-the-same-time/#:~:text=It%E2%80%99s%20Christmas,OSS%20Models.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJb3-qX0Ki_cnfl0lYF7DUAW3ojYuPzodQvLR2LLiScoY%3D\"> www.forbes.com</span><b>12. New Models From OpenAI, Anthropic, Google – All At The Same Time - Forbes</b></a><span class=\"passage\">It’s Christmas in August – at least, for those tech-wonks who are interested in new model releases. 
Today’s news is a very full stocking of brand new LLM editions from three of the biggies – OpenAI, Anthropic, and Google. I’ll go over these one by one, discussing what these most recent model iterations bring to the table. OpenAI OSS Models.</span></span></sup></li>\n<li><strong>Devstral 2</strong> (December 9): Mistral released Devstral 2, a next-generation coding model family in 123B size, alongside the Mistral Vibe CLI. <sup class=\"trigger\"><a href=\"https://www.aibase.com/ai-timeline/2025#:~:text=Explore%20the,AI%20chronicle.\">26</a><span class=\"tooltip top center citation\"><a href=\"https://www.aibase.com/ai-timeline/2025#:~:text=Explore%20the,AI%20chronicle.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ0-NNw-9x3y6pt8l_-EsIclAaMgCVT2PvNZTajToQl0g%3D\"> www.aibase.com</span><b>26. 2025 Artificial Intelligence (AI) Events Timeline</b></a><span class=\"passage\">Explore the comprehensive timeline of 2025's artificial intelligence milestones, including major breakthroughs, product launches, and industry developments. Stay updated with the most authoritative AI chronicle.</span></span></sup></li>\n<li><strong>GPT-5.2</strong> (December 11): OpenAI released GPT-5.2, capable of solving open problems in statistical learning theory that had remained unresolved by human researchers. <sup class=\"trigger\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\">5</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>5. 
Inside language models (from GPT to Nova) – Dr Alan D. Thompson...</b></a><span class=\"passage\">Language model sizes Summary of current models Count of LLMs released per month (2024) Compute Context windows Achievements unlocked: Emergent abilities of LLMs Large language models: API or on-premise Increasing dataset sizes 2018-2025 GPT-3’s top 10 datasets by domain/source Contents of GPT-3 & the Pile v1 Contents of Chinese models Language model sizes & predictions [...]What’s in my AI? A Comprehensive Analysis of Datasets Used to Train GPT-1, GPT-2, GPT-3, GPT-NeoX-20B, Megatron-11B, ...</span></span></sup><sup class=\"trigger\"><a href=\"https://www.aibase.com/ai-timeline/2025#:~:text=Explore%20the,AI%20chronicle.\">26</a><span class=\"tooltip top center citation\"><a href=\"https://www.aibase.com/ai-timeline/2025#:~:text=Explore%20the,AI%20chronicle.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJ0-NNw-9x3y6pt8l_-EsIclAaMgCVT2PvNZTajToQl0g%3D\"> www.aibase.com</span><b>26. 2025 Artificial Intelligence (AI) Events Timeline</b></a><span class=\"passage\">Explore the comprehensive timeline of 2025's artificial intelligence milestones, including major breakthroughs, product launches, and industry developments. Stay updated with the most authoritative AI chronicle.</span></span></sup></li>\n<li><strong>ChatGPT Images Upgrade</strong> (December 16): OpenAI released an upgraded ChatGPT Images model (GPT-Image-1.5) with more precise edits, consistent details, and 4x faster generation. 
<sup class=\"trigger\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\">5</a><span class=\"tooltip top center citation\"><a href=\"https://lifearchitect.ai/models/#:~:text=Language%20model,predictions%20%5B&text=%5DWhat%E2%80%99s%20in,and%20Gopher.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJTBUU_sXEodO6V3hy84Cml2-XdaC0sGOw0wDbWSkZJSw%3D\"> lifearchitect.ai</span><b>5. Inside language models (from GPT to Nova) – Dr Alan D. Thompson...</b></a><span class=\"passage\">Language model sizes Summary of current models Count of LLMs released per month (2024) Compute Context windows Achievements unlocked: Emergent abilities of LLMs Large language models: API or on-premise Increasing dataset sizes 2018-2025 GPT-3’s top 10 datasets by domain/source Contents of GPT-3 & the Pile v1 Contents of Chinese models Language model sizes & predictions [...]What’s in my AI? A Comprehensive Analysis of Datasets Used to Train GPT-1, GPT-2, GPT-3, GPT-NeoX-20B, Megatron-11B, ...</span></span></sup></li>\n<li><strong>Mistral OCR 3</strong> (December 17): Mistral released Mistral OCR 3, achieving new frontiers in document processing accuracy and efficiency. <sup class=\"trigger\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\">2</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>2. 
2018–2025 Timeline Revealed! | by Generative AI | Medium</b></a><span class=\"passage\">Artificial Intelligence has seen groundbreaking advancements over the years, with Large Language Models (LLMs) evolving through post-training techniques that enhance their capabilities, alignment,...</span></span></sup></li>\n<li><strong>Gemini 3 Flash</strong> (December 17): Google released Gemini 3 Flash, offering Pro-grade reasoning at Flash-level speed, becoming the default model in the Gemini app and AI Mode in Search. <sup class=\"trigger\"><a href=\"https://medium.com/@hadiyolworld007/llm-frameworks-you-cant-ignore-in-2025-apple-meta-google-more-939ae36010dc#:~:text=The%20AI,Anthropic%20anymore.\">23</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@hadiyolworld007/llm-frameworks-you-cant-ignore-in-2025-apple-meta-google-more-939ae36010dc#:~:text=The%20AI,Anthropic%20anymore.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>23. LLM Frameworks You Can’t Ignore in 2025 (Apple, Meta, Google...)</b></a><span class=\"passage\">The AI space isn’t just moving fast in 2025 — it’s warping forward like a bullet train with no brakes. LLMs (Large Language Models) are evolving rapidly, but what’s truly game-changing this year isn’t just the models — it’s the frameworks that power them. From on-device optimization to real-time agent orchestration, these frameworks are redefining how we build, scale, and deploy intelligence. And the surprising part? 
It’s not just OpenAI or Anthropic anymore.</span></span></sup><sup class=\"trigger\"><a href=\"https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/#:~:text=Why%20would,its%20shareholders.\">25</a><span class=\"tooltip top center citation\"><a href=\"https://www.forbes.com/sites/robtoews/2024/12/22/10-ai-predictions-for-2025/#:~:text=Why%20would,its%20shareholders.\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJb3-qX0Ki_cnfl0lYF7DUAW3ojYuPzodQvLR2LLiScoY%3D\"> www.forbes.com</span><b>25. 2025 will be a huge year for the field of artificial intelligence.</b></a><span class=\"passage\">Why would Meta make this strategic pivot? Keeping up with the LLM frontier is incredibly expensive. Meta will need to invest many billions of dollars every year if it wants Llama to stay at or near parity with the latest frontier models from OpenAI, Anthropic and others. Meta is one of the world’s largest and most deep-pocketed companies. But it is also a publicly traded company that is ultimately answerable to its shareholders.</span></span></sup></li>\n<li><strong>Seed1.8</strong> (December 18): ByteDance released Seed1.8, a model designed for generalized real-world agency supporting text and image inputs. <sup class=\"trigger\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\">2</a><span class=\"tooltip top center citation\"><a href=\"https://medium.com/@genai.works/the-evolution-of-generative-ai-2018-2025-timeline-revealed-ff1cf4d46fcc#:~:text=Artificial%20Intelligence,capabilities%2C%20alignment%2C\"><span class=\"source\"><img alt=\"\" src=\"https://p.kagi.com/proxy/favicons?c=lQ30-cxV_E2Vdyqb6hrbuYgId7qu7hecjO7o1T7aqQDFc7XpcLttL1Ryala-GNHJBGQH74Q81gRS6YwzivL6uub9-AAfvy0RHe-vtiCjq5U%3D\"> medium.com</span><b>2. 
2018–2025 Timeline Revealed! | by Generative AI | Medium</b></a><span class=\"passage\">Artificial Intelligence has seen groundbreaking advancements over the years, with Large Language Models (LLMs) evolving through post-training techniques that enhance their capabilities, alignment,...</span></span></sup></li>\n</ul>\n<hr>\n<p><em>Note: This timeline captures major publicly announced releases. Some dates reflect initial announcements or previews rather than general availability. Minor updates and regional rollouts are not exhaustively listed.</em></p>","md":"# The 2025 Generative AI Release Timeline\n\n## January 2025\n\n- **DeepSeek-R1** (January 20): DeepSeek released DeepSeek-R1, an open-source reasoning model based on DeepSeek-V3 that achieves performance comparable to OpenAI o1 across math, code, and reasoning tasks using chain-of-thought methodology. \n- **Doubao Realtime Voice Model** (January 20): ByteDance launched the Doubao Realtime Voice Model, an integrated voice understanding and generation model enabling end-to-end speech dialogues on the Doubao app. [^6]\n- **OpenAI Operator** (January 23): OpenAI launched Operator, a general-purpose AI agent powered by the Computer-Using Agent (CUA) model that can autonomously control a web browser to perform tasks like clicking, typing, and scrolling. [^8][^15]\n- **Qwen2.5-Max** (January 29): Alibaba released Qwen2.5-Max, a large-scale MoE model that surpassed DeepSeek-V3 and competed with leading proprietary models across various benchmarks. [^1][^16]\n- **Mistral Small 3** (January 30): Mistral AI released Mistral Small 3, a latency-optimized 24B-parameter open-source model under Apache 2.0 license, competitive with Llama 3.3 70B while being 3x faster. \n- **Gemini 2.0 Flash** (January 30): Google released Gemini 2.0 Flash as the new default model in the Gemini app, featuring superior speed, built-in tool use, multimodal generation, and a 1M token context window. 
[^4]\n- **OpenAI o3-mini** (January 31): OpenAI released o3-mini, a cost-efficient reasoning model with three reasoning effort levels (low, medium, high), available to all ChatGPT users including free-tier. [^8]\n\n## February 2025\n\n- **ChatGPT Deep Research** (February 3): OpenAI released Deep Research, an AI system integrated into ChatGPT that autonomously browses the web for 5-30 minutes to generate comprehensive cited reports on user-specified topics. [^7]\n- **Gemini 2.0 Pro** (February 5): Google released Gemini 2.0 Pro alongside the GA version of Gemini 2.0 Flash, offering enhanced quality and capabilities for the agentic era. [^4][^9]\n- **Mistral Le Chat** (February 6): Mistral AI released Le Chat, an AI assistant capable of generating responses at up to 1,000 words per second. [^5]\n- **Imagen 3 API** (February 6): Google made Imagen 3, its state-of-the-art image generation model, available through the Gemini API for developers. [^1]\n- **Grok 3** (February 17-19): xAI released Grok 3, claiming it to be \"the smartest AI on Earth\" with 10x the compute of previous models, achieving 93.3% on AIME 2025 and 84.6% on GPQA benchmarks. [^15][^27]\n- **Claude 3.7 Sonnet & Claude Code** (February 24): Anthropic released Claude 3.7 Sonnet, the first hybrid reasoning model with extended thinking capabilities, alongside Claude Code, an agentic coding tool for terminal-based development. [^5][^14]\n- **YandexGPT 5** (February 25): Yandex released YandexGPT 5 Lite Pretrain and Pro versions for use in the Alice Neural Network chatbot. [^4]\n- **Gemini 2.0 Flash-Lite** (February 25): Google released Gemini 2.0 Flash-Lite, a lighter version optimized for speed and efficiency. [^5]\n- **GPT-4.5** (February 27): OpenAI announced GPT-4.5 as a research preview, its largest non-reasoning model to date, later reported to be indistinguishable from humans in text conversations. 
[^9][^19]\n\n## March 2025\n\n- **Cohere Aya Vision** (March 4): Cohere Labs released Aya Vision, a groundbreaking multimodal model for visual understanding. [^14]\n- **Gemma 3** (March 12): Google released Gemma 3, a collection of lightweight open models with 128K context window, multimodal capabilities (text and image), and support for over 140 languages. [^6][^18]\n- **Cohere Command A** (March 13): Cohere released Command A, a 111B parameter enterprise-focused model delivering maximum performance on just two GPUs, matching GPT-4o and DeepSeek-V3 on agentic tasks. [^4][^7]\n- **Baidu ERNIE 4.5 & X1** (March 16): Baidu released ERNIE 4.5 foundation model and ERNIE X1 reasoning model, with X1 performing comparably to DeepSeek R1 at half the price. [^10]\n- **Mistral Small 3.1** (March 17): Mistral released Mistral Small 3.1, adding multimodal (image) capabilities and 128K context length while still fitting on a single RTX 4090. [^9][^19]\n- **Gemini 2.5 Pro** (March 25): Google released Gemini 2.5 Pro Experimental, its most intelligent model yet, topping the LMArena leaderboard with strong reasoning and coding capabilities. [^2][^3]\n- **GPT-4o Image Generation** (March 25): OpenAI launched native image generation capabilities in GPT-4o, enabling photorealistic output, image transformation, and reliable text rendering in images. \n- **DeepSeek-V3-0324** (March 25): DeepSeek released an updated V3 model with improved performance, now under MIT license. [^12]\n- **Runway Gen-4** (March 31): Runway released Gen-4, enabling consistent character, location, and object generation across scenes with coherent world environments. [^15][^20]\n\n## April 2025\n\n- **Midjourney V7** (April 3-4): Midjourney released V7, its first new AI image model in nearly a year, featuring improved text and image prompt handling with richer textures and more coherent details. 
[^1]\n- **Llama 4** (April 5): Meta released Llama 4 (Scout, Maverick, and Behemoth variants), natively multimodal models trained on text, image, and video data, with context lengths of up to 10 million tokens on Scout. [^1][^21]\n- **GPT-4.1** (April 14): OpenAI released GPT-4.1 family (including mini and nano variants) with major gains in coding, instruction following, and long-context understanding up to 1M tokens. \n- **OpenAI o3 & o4-mini** (April 16): OpenAI released o3 and o4-mini, its smartest reasoning models to date, representing a step change in ChatGPT's capabilities for complex tasks. [^16][^19]\n- **Adobe Firefly Image 4 & Video Model** (April 24): Adobe unveiled Firefly Image Model 4 with unmatched definition and realism, plus the Firefly Video Model for dynamic, commercially safe video creation. [^1]\n- **Qwen3** (April 28-29): Alibaba released Qwen3, featuring eight models from 0.6B to 235B parameters under Apache 2.0 license, with the flagship MoE model activating 22B of 235B parameters. [^11][^22]\n\n## May 2025\n\n- **AlphaEvolve** (May 14): Google DeepMind announced AlphaEvolve, a Gemini-powered coding agent for designing advanced algorithms. [^1]\n- **GPT-4.1 in ChatGPT** (May 14): OpenAI made GPT-4.1 available directly in ChatGPT for Plus, Pro, and Team users, excelling at coding and instruction following. [^9]\n- **OpenAI Codex** (May 16): OpenAI introduced Codex, a cloud-based coding agent that can navigate repositories, edit files, run commands, and execute tests from a prompt or spec. [^28]\n- **Veo 3** (May 20): Google DeepMind released Veo 3, a state-of-the-art video generation model that also creates synchronized audio including dialogue, sound effects, and ambient noise. [^2][^31]\n- **Google Flow** (May 20): Google launched Flow, an AI filmmaking tool built for creatives, powered by Veo, Imagen, and Gemini for creating cinematic clips and scenes. 
[^2][^18]\n- **Imagen 4** (May 20): Google released Imagen 4 at I/O 2025, featuring remarkable clarity in fine details, excellent text rendering, and both photorealistic and abstract style capabilities. [^1]\n- **Gemma 3n Preview** (May 20): Google announced Gemma 3n preview, a mobile-first architecture bringing powerful multimodal AI to edge devices with text, image, video, and audio capabilities. [^1]\n- **Claude 4** (May 22): Anthropic released Claude 4 with two models (Opus 4 and Sonnet 4), setting new standards for coding, advanced reasoning, and AI agents, with Opus 4 capable of sustained performance on complex, long-running tasks. [^2][^3]\n- **Mistral Medium 3** (May 2025): Mistral released Mistral Medium 3, a frontier-class multimodal model for enterprise use. [^18]\n\n## June 2025\n\n- **Mistral Small 3.2** (June 20): Mistral released Mistral Small 3.2, a minor update to Small 3.1 with continued improvements for local deployment. [^1]\n- **Imagen 4 API** (June 25): Google made Imagen 4 and Imagen 4 Ultra available in the Gemini API and Google AI Studio with significantly improved text rendering. [^1]\n- **Gemma 3n Full Release** (June 26): Google officially released Gemma 3n, a next-gen multimodal AI model designed to run on smartphones and edge devices with 2-4B effective parameters while supporting text, image, video, and audio. [^1][^21]\n\n## July 2025\n\n- **Baidu ERNIE 4.5 Open Source** (July 1): Baidu open-sourced the ERNIE 4.5 series with ten model variants ranging from 0.3B to 424B parameters. [^13]\n- **Grok 4** (July 9): xAI released Grok 4, with Grok 4 Heavy available for SuperGrok Heavy subscribers for more challenging tasks. [^13][^24]\n- **Adobe Firefly Video Enhancements** (July 17): Adobe introduced enhanced Firefly Video Model with improved motion fidelity and advanced video controls. 
[^1]
- **ChatGPT Agent Mode** (July 17): OpenAI integrated Operator into ChatGPT as "agent mode," combining deep research with visual browser capabilities. [^15][^24]
- **GLM-4.5** (July 29): Zhipu AI released GLM-4.5 under the MIT license, in a 355B-parameter flagship (GLM-4.5) and a 106B-parameter GLM-4.5-Air, trained on a 22T-token corpus. [^32]

## August 2025

- **GPT-OSS** (August 5): OpenAI released gpt-oss, its first open-weight models since GPT-2, in 20B and 120B sizes under the Apache 2.0 license. [^23]
- **Claude Opus 4.1** (August 5): Anthropic released Claude Opus 4.1 with enhanced capabilities. [^17]
- **GPT-5** (August 7): OpenAI released GPT-5 in three variants (GPT-5, GPT-5 mini, GPT-5 nano) with integrated thinking abilities, available in ChatGPT and the API. [^10][^29]
- **Gemma 3 270M** (August 14): Google released Gemma 3 in a compact 270M size for hyper-efficient deployment. [^30]
- **Imagen 4 Fast GA** (August 15): Google made Imagen 4 Fast and the full Imagen 4 family generally available in the Gemini API. [^1]
- **DeepSeek-V3.1** (August 21): DeepSeek released V3.1, a 671B-parameter hybrid model that can switch between thinking and non-thinking modes, trained on 15.6T tokens. [^29][^34]
- **Mistral Medium 3.1** (August 2025): Mistral released Mistral Medium 3.1, a frontier-class multimodal model. [^13]
- **Nano Banana (Gemini 2.5 Flash Image)** (August 26): Google launched Nano Banana, a state-of-the-art image generation and editing model integrated into the Gemini app. [^16][^33]
- **YandexGPT 5.1 Pro** (August 28): Yandex released YandexGPT 5.1 Pro for the Alice chatbot. [^11]

## September 2025

- **Apertus** (September 2): ETH Zurich and EPF Lausanne released Apertus, a 70B-parameter model claimed to be the first LLM compliant with the EU AI Act, under the Apache 2.0 license. [^2]
- **OpenAI Codex Upgrades** (September 15): OpenAI released major upgrades to Codex, making it faster, more reliable, and better at real-time collaboration.
[^20]
- **Qwen3-Omni** (September 22): Alibaba released Qwen3-Omni, a multimodal model processing text, images, audio, and video with real-time streaming responses in text and natural speech. [^11]
- **Claude Sonnet 4.5** (September 29): Anthropic released Claude Sonnet 4.5 with enhanced capabilities. [^6]
- **Claude Code 2.0 & Claude Agent SDK** (September 29): Anthropic released Claude Code 2.0 with checkpoints and a native VS Code extension, and renamed its agent toolkit to the Claude Agent SDK for building complex AI agents. [^7][^26]
- **DeepSeek-V3.2-Exp** (September 29): DeepSeek released an experimental V3.2 model introducing the DeepSeek Sparse Attention (DSA) mechanism. [^35]
- **GLM-4.6** (September 30): Zhipu AI released GLM-4.6, a 357B-parameter model under the Apache 2.0 license. [^22]
- **Sora 2** (September 30): OpenAI released Sora 2, its flagship video and audio generation model, with advanced world-simulation capabilities, improved physics, better instruction following, and synchronized audio. [^3][^10]

## October 2025

- **Veo 3.1** (October 15): Google released Veo 3.1 with Scene Extension, First-Last Frame transitions, and Reference-to-Video features for longer, more coherent cinematic sequences. [^1][^6]
- **Anthropic Agent Skills** (October 16): Anthropic introduced Agent Skills for extending Claude's capabilities with task-specific workflows. [^2]
- **ChatGPT Atlas** (October 21): OpenAI introduced ChatGPT Atlas, a web browser built around ChatGPT. [^2]
- **Adobe Firefly Image 5** (October 28): Adobe released Firefly Image Model 5 for photorealism and prompt-based editing, along with new partner models from ElevenLabs, Google, and Topaz Labs. [^1]
- **Alice AI LLM 1.0** (October 28): Yandex released Alice AI LLM 1.0 for the Alice AI chatbot. [^1]

## November 2025

- **ERNIE-4.5-VL-28B-A3B-Thinking** (November 11): Baidu released a multimodal reasoning model achieving state-of-the-art performance while activating only 3B parameters.
[^12]
- **ERNIE 5.0** (November 13): Baidu unveiled ERNIE 5.0 at Baidu World 2025, a natively omni-modal foundation model jointly modeling text, images, audio, and video. [^3][^29]
- **Nano Banana Pro** (November 20): Google released Nano Banana Pro, built on Gemini 3 Pro, offering markedly improved image-generation accuracy with legible text in multiple languages. [^8][^24]
- **Claude Opus 4.5** (November 2025): Anthropic released Claude Opus 4.5, which it described as the best model in the world for coding, agents, and computer use. [^4][^22]

## December 2025

- **DeepSeek-V3.2** (December 1): DeepSeek released V3.2 with further enhanced code generation, reportedly achieving 89% on HumanEval and 41% on LiveCodeBench. [^3][^25]
- **Runway Gen-4.5** (December 1): Runway released Gen-4.5, pushing the frontier of video generation with greater visual fidelity and creative control. [^16]
- **Mistral 3 (Large 3, Ministral 3)** (December 2): Mistral released the Mistral 3 family, including Large 3 (a 675B-total, 41B-active-parameter MoE) and the Ministral 3 models, with 256K context and multimodal capabilities. [^3][^17]
- **Anthropic MCP Donation** (December 9): Anthropic donated the Model Context Protocol to the newly established Agentic AI Foundation for open governance. [^12]
- **Devstral 2** (December 9): Mistral released Devstral 2, a next-generation 123B-parameter coding model family, alongside the Mistral Vibe CLI. [^26]
- **GPT-5.2** (December 11): OpenAI released GPT-5.2, reportedly capable of solving open problems in statistical learning theory that had remained unresolved by human researchers. [^5][^26]
- **ChatGPT Images Upgrade** (December 16): OpenAI released an upgraded ChatGPT Images model (GPT-Image-1.5) with more precise edits, consistent details, and 4x faster generation. [^5]
- **Mistral OCR 3** (December 17): Mistral released Mistral OCR 3, advancing document-processing accuracy and efficiency.
[^2]
- **Gemini 3 Flash** (December 17): Google released Gemini 3 Flash, offering Pro-grade reasoning at Flash-level speed, and made it the default model in the Gemini app and in AI Mode in Search. [^23][^25]
- **Seed1.8** (December 18): ByteDance released Seed1.8, a model designed for generalized real-world agency, supporting text and image inputs. [^2]

---

*Note: This timeline captures major publicly announced releases. Some dates reflect initial announcements or previews rather than general availability. Minor updates and regional rollouts are not exhaustively listed.*
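Several entries above quote mixture-of-experts (MoE) sizes as a pair of figures, total versus active parameters (e.g. Qwen3's flagship activating 22B of 235B, or Mistral Large 3's 41B of 675B). A minimal sketch of how those two figures relate, assuming a simplified layer with some always-on shared parameters plus top-k routing over equal-sized experts; the expert count and shared/expert split below are illustrative assumptions, not any model's published configuration:

```python
def active_params(shared_b: float, experts_total_b: float,
                  num_experts: int, top_k: int) -> float:
    """Parameters touched per token in a simplified MoE model:
    always-on shared parameters plus the top_k experts the router selects."""
    per_expert_b = experts_total_b / num_experts   # size of one expert (billions)
    return shared_b + top_k * per_expert_b         # only top_k experts fire per token

# Hypothetical split of a 235B-total model: 15B shared + 220B of expert weights,
# routed 8-of-128. These numbers are assumptions for illustration only.
total = 15 + 220
active = active_params(shared_b=15, experts_total_b=220, num_experts=128, top_k=8)
print(f"total {total}B, active {active:.2f}B per token")
```

The point of the split is that serving cost scales roughly with the active count while capacity scales with the total, which is why the timeline entries quote both numbers.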