{"id":7282,"date":"2023-10-28T18:05:42","date_gmt":"2023-10-28T16:05:42","guid":{"rendered":"https:\/\/toukiela.com\/ia-titans-accordent-une-minuscule-attention-aux-chercheurs-en-securite-ia-vous-ne-devinerez-jamais-a-quel-point\/"},"modified":"2023-10-28T18:05:43","modified_gmt":"2023-10-28T16:05:43","slug":"ia-titans-accordent-une-minuscule-attention-aux-chercheurs-en-securite-ia-vous-ne-devinerez-jamais-a-quel-point","status":"publish","type":"post","link":"https:\/\/toukiela.com\/en\/ia-titans-give-a-minuscule-attention-to-safety-researchers-ia-you-will-never-know-at-which-point\/","title":{"rendered":"IA Titans pay (tiny) attention to AI security researchers: you'll never guess how much!"},"content":{"rendered":"<p><em><\/em><\/p>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_2 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Contents<\/p>\n<label for=\"ez-toc-cssicon-toggle-item-69f236e2a87d8\" class=\"ez-toc-cssicon-toggle-label\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewbox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewbox=\"0 0 24 24\" version=\"1.2\" baseprofile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/label><input type=\"checkbox\"  
id=\"ez-toc-cssicon-toggle-item-69f236e2a87d8\"  aria-label=\"Toggle\" \/><nav><ul class='ez-toc-list ez-toc-list-level-1' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/toukiela.com\/en\/ia-titans-give-a-minuscule-attention-to-safety-researchers-ia-you-will-never-know-at-which-point\/#Un_nouveau_fonds_de_recherche_sur_levaluation_des_modeles_dIA\" >A new research fund for the evaluation of AI models<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/toukiela.com\/en\/ia-titans-give-a-minuscule-attention-to-safety-researchers-ia-you-will-never-know-at-which-point\/#Un_soutien_financier_pour_les_chercheurs\" >Financial support for researchers<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/toukiela.com\/en\/ia-titans-give-a-minuscule-attention-to-safety-researchers-ia-you-will-never-know-at-which-point\/#Administration_du_fonds\" >Fund administration<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/toukiela.com\/en\/ia-titans-give-a-minuscule-attention-to-safety-researchers-ia-you-will-never-know-at-which-point\/#Un_appel_a_contributions\" >A call for contributions<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/toukiela.com\/en\/ia-titans-give-a-minuscule-attention-to-safety-researchers-ia-you-will-never-know-at-which-point\/#Comparaison_avec_les_investissements_commerciaux\" >Comparison with commercial investments<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/toukiela.com\/en\/ia-titans-give-a-minuscule-attention-to-safety-researchers-ia-you-will-never-know-at-which-point\/#Comparaison_avec_dautres_subventions_de_securite_de_lIA\" >Comparison with other AI 
safety grants<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/toukiela.com\/en\/ia-titans-give-a-minuscule-attention-to-safety-researchers-ia-you-will-never-know-at-which-point\/#Le_cout_eleve_de_la_recherche_sur_la_securite_de_lIA\" >The high cost of AI safety research<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/toukiela.com\/en\/ia-titans-give-a-minuscule-attention-to-safety-researchers-ia-you-will-never-know-at-which-point\/#Un_potentiel_fonds_plus_important_a_lavenir\" >Greater fund potential in the future<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Un_nouveau_fonds_de_recherche_sur_levaluation_des_modeles_dIA\"><\/span>A new research fund for the evaluation of AI models<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The Frontier Model Forum, an industry organization focused on the study of \"frontier\" AI models such as GPT-4 and ChatGPT, announced today that it has committed $10 million to a new fund to advance research into tools for evaluating the most successful AI models.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Un_soutien_financier_pour_les_chercheurs\"><\/span>Financial support for researchers<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The fund, according to the Frontier Model Forum - whose members include Anthropic, Google, Microsoft and OpenAI - will support researchers affiliated with academic institutions, research institutions and startups. Initial funding will come from both the Frontier Model Forum and its philanthropic partners, the Patrick J. 
McGovern Foundation, the David and Lucile Packard Foundation, former Google CEO Eric Schmidt and Estonian billionaire Jaan Tallinn.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Administration_du_fonds\"><\/span>Fund administration<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The fund will be administered by the Meridian Institute, a non-profit organization based in Washington, D.C., which will issue a call for proposals \"in the coming months\", according to the Frontier Model Forum. The Institute's work will be supported by an advisory board of external experts, experts from AI companies and \"individuals with experience in grantmaking\", the Frontier Model Forum added, without specifying exactly who these experts and individuals are, or how large the advisory board is.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Un_appel_a_contributions\"><\/span>A call for contributions<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>\"We expect additional contributions from other partners,\" reads a press release published by the Frontier Model Forum on several official blogs. \"The main objective of the fund will be to support the development of new model evaluations and techniques... to develop and test evaluation techniques for the potentially dangerous capabilities of frontier systems.\"<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Comparaison_avec_les_investissements_commerciaux\"><\/span>Comparison with commercial investments<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>To be sure, $10 million is not an insignificant sum. (More precisely, it's $10 million in pledges: the David and Lucile Packard Foundation has yet to officially commit its funding.)
But in the context of AI safety research, it seems rather, well, conservative - at least compared with what Frontier Model Forum members have spent on their commercial activities.<\/p>\n<p>This year alone, Anthropic raised billions of dollars from Amazon to develop a next-generation AI assistant, following a $300 million investment from Google. Microsoft has pledged $10 billion to OpenAI, and OpenAI - with annual revenue well over $1 billion - is reportedly in talks to sell shares at a valuation of nearly $90 billion.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Comparaison_avec_dautres_subventions_de_securite_de_lIA\"><\/span>Comparison with other AI safety grants<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The fund is also small compared to other AI safety grants. Open Philanthropy, the grantmaking and research foundation co-founded by Facebook co-founder Dustin Moskovitz, has donated around $307 million to AI safety, according to an analysis on the Less Wrong blog. The Survival and Flourishing Fund, a philanthropic fund backed primarily by Tallinn, has donated around $30 million to AI safety projects. And the US National Science Foundation has said it will spend $20 million on AI safety research over the next two years, supported by grants from Open Philanthropy.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Le_cout_eleve_de_la_recherche_sur_la_securite_de_lIA\"><\/span>The high cost of AI safety research<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>AI safety researchers won't necessarily train GPT-4-level models from scratch. But even the smaller, less capable models they might wish to test are expensive to develop on today's hardware, with costs ranging from hundreds of thousands to millions of dollars. And that doesn't count other overheads, such as researchers' salaries.
(Data scientists don't come cheap.)<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Un_potentiel_fonds_plus_important_a_lavenir\"><\/span>A potentially larger fund in the future<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The Frontier Model Forum hints at a larger fund in the future. If this materializes, it could stand a chance of advancing AI safety research - provided the fund's for-profit backers refrain from exerting undue influence over the research. In any case, the initial tranche seems far too limited to accomplish much.<\/p>","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":7284,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","footnotes":""},"categories":[608],"tags":[],"class_list":["post-7282","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-actualite-intelligence-artificielle","generate-columns","tablet-grid-50","mobile-grid-100","grid-parent","grid-50"],"_links":{"self":[{"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/posts\/7282","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/comments?post=7282"}],"version-history":[{"count":1,"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/posts\/7282\/revisions"}],"predecessor-version":[{"id":7283,"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/posts\/7282\/revisions\/7283"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/media\/7284"}],"wp:attachment":[{"href":"https:\/\/toukiela.c
om\/en\/wp-json\/wp\/v2\/media?parent=7282"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/categories?post=7282"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/toukiela.com\/en\/wp-json\/wp\/v2\/tags?post=7282"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}