[{"data":1,"prerenderedAt":205},["ShallowReactive",2],{"DlFXI4Eibt_Bn9lrEZz1TYbHCWFZj3IvqwHQSEW-Exc":3,"waVgPP3D6jJZ6xk09k-yyZPetjaX47X8GL65bOVrtTg":194},{"code":4,"msg":5,"data":6},0,"",{"category":7,"tag":11,"hot":39,"new":78,"banner":118,"data":143,"cache":193},[8,9,10],"Agent","OpenAI","LLM",[12,14,17,20,23,25,27,30,33,36],{"title":8,"total":13},39,{"title":15,"total":16},"Google",44,{"title":18,"total":19},"Nvidia",13,{"title":21,"total":22},"Claude",11,{"title":9,"total":24},35,{"title":10,"total":26},85,{"title":28,"total":29},"DeepSeek",9,{"title":31,"total":32},"OCR",1,{"title":34,"total":35},"Chat",7,{"title":37,"total":38},"Generator",116,[40,48,55,64,71],{"id":41,"publish_date":42,"is_original":4,"collection":5,"cover_url":43,"cover_url_1_1":44,"title":45,"summary":46,"author":47},557,"2022-04-29","article_res/cover/7a9b1375ed9bb298154981bae42b794d.jpeg","article_res/cover/afa281dd52bc0454e6735daa8e6b0706.jpeg","Translation and summary of Messari Report [2.8 Kristin Smith, Blockchain Association and Katie Haun, a16z]","We need unity and speed right now.","Translation",{"id":49,"publish_date":50,"is_original":4,"collection":5,"cover_url":51,"cover_url_1_1":52,"title":53,"summary":54,"author":47},531,"2022-05-25","article_res/cover/e8362057f8fa189594c60afdfaaeb6e5.jpeg","article_res/cover/8ea08d0d6fa7eee6b57ed4ec61b61ad6.jpeg","Decentralized Society: Finding Web3’s Soul / Decentralized Society: Finding the Soul of Web3 -7","Decentralization through Pluralism When analyzing ecosystems, it's desirable to measure how decentralized it is.",{"id":56,"publish_date":57,"is_original":32,"collection":58,"cover_url":59,"cover_url_1_1":60,"title":61,"summary":62,"author":63},127,"2024-11-14","#Google #AI Game #World Model #AI Story","article_res/cover/0233a875b7ec2debf59779e311547569.jpeg","article_res/cover/6ffddb6ae4914b3c699493311aa9f198.jpeg","Google Launches \"Unbounded\": A Generative Infinite Character Life Simulation Game","Unbounded: A Generative Infinite Game of Character Life Simulation","Renee's Entrepreneurial Journey",{"id":13,"publish_date":65,"is_original":32,"collection":66,"cover_url":67,"cover_url_1_1":68,"title":69,"summary":70,"author":63},"2025-02-14","#Deep Dive into LLMs #Andrej Karpathy #LLM #Tool Use #Hallucination","article_res/cover/11e858ad6b74dfa80f923d549b62855c.jpeg","article_res/cover/615e1b320f1fc163edc1d2d154a6de33.jpeg","Andrej Karpathy's in-depth explanation of LLM (Part 4): Hallucinations","hallucinations, tool use, knowledge/working memory",{"id":72,"publish_date":73,"is_original":4,"collection":5,"cover_url":74,"cover_url_1_1":75,"title":76,"summary":77,"author":47},579,"2022-04-07","article_res/cover/39387376ba28447af1eb40576b9df215.jpeg","article_res/cover/02727ede8551ed49901d0abe6d6305b7.jpeg","Messari Report Translation and Summary 【1-7 Surviving the Winter】","I’d be more cautious here: 10 year and 10 hour thinking only.",[79,87,95,103,111],{"id":80,"publish_date":81,"is_original":32,"collection":82,"cover_url":83,"cover_url_1_1":84,"title":85,"summary":86,"author":63},627,"2025-03-20","#AI Avatar #AI Video Generation","article_res/cover/d95481358f73924989f8c4ee9c75d1c8.jpeg","article_res/cover/b74bc0fab01f8b6a6aa87696c0c3ed8b.jpeg","DisPose: Generating Animated Videos by Driving Video with Reference Images","DisPose is a controllable human image animation method that enhances video generation.",{"id":88,"publish_date":89,"is_original":32,"collection":90,"cover_url":91,"cover_url_1_1":92,"title":93,"summary":94,"author":63},626,"2025-03-21","#Deep 
Dive into LLMs #LLM #RL #Andrej Karpathy #AlphaGo","article_res/cover/446553a5c8f8f2f07d97b20eaee84e56.jpeg","article_res/cover/e6c2823409c9b34624064b9acbaca6f1.jpeg","AlphaGo and the Power of Reinforcement Learning - Andrej Karpathy's Deep Dive on LLMs (Part 9)","Simply learning from humans will never surpass human capabilities.",{"id":96,"publish_date":97,"is_original":32,"collection":98,"cover_url":99,"cover_url_1_1":100,"title":101,"summary":102,"author":63},625,"2025-03-22","#Deep Dive into LLMs #LLM #RL #RLHF #Andrej Karpathy","article_res/cover/8da81d38b1e5cf558a164710fd8a5389.jpeg","article_res/cover/96f028d76c362a99a0dd56389e8f7a9b.jpeg","Reinforcement Learning from Human Feedback (RLHF) - Andrej Karpathy's Deep Dive on LLMs (Part 10)","Fine-Tuning Language Models from Human Preferences",{"id":104,"publish_date":105,"is_original":32,"collection":106,"cover_url":107,"cover_url_1_1":108,"title":109,"summary":110,"author":63},624,"2025-03-23","#Deep Dive into LLMs #LLM #Andrej Karpathy #AI Agent #MMM","article_res/cover/a5e7c3d48bb09109684d6513287c661d.jpeg","article_res/cover/d3f22b7c0ab8d82fd2da457a299e0773.jpeg","The Future of Large Language Models - Andrej Karpathy's In-Depth Explanation of LLM (Part 11)","preview of things to come",{"id":112,"publish_date":105,"is_original":32,"collection":113,"cover_url":114,"cover_url_1_1":115,"title":116,"summary":117,"author":63},623,"#Google #Voe #AI Video Generation","article_res/cover/c44062fea0f336c2b96b3928292392c2.jpeg","article_res/cover/a041041c69092ad3db191c5bf3ff981b.jpeg","Trial of Google's video generation model VOE2","Our state-of-the-art video generation model",[119,127,135],{"id":120,"publish_date":121,"is_original":32,"collection":122,"cover_url":123,"cover_url_1_1":124,"title":125,"summary":126,"author":63},160,"2024-10-04","#Philosophy","article_res/cover/496990c49211e8b7f996b7d39c18168e.jpeg","article_res/cover/14dbaa1ade9cb4316d5829423a900362.jpeg","Time","The fungus of the morning does not know the waxing and waning of the moon, and the cicada does not know the seasons; this is a short life. To the south of the state of Chu there is a dark spirit which regards five hundred years as spring and five hundred years as autumn. 
In ancient times there was a great tree called the Ming which regarded eight thousand years as spring and eight thousand years as autumn; this is a long life.",{"id":128,"publish_date":129,"is_original":32,"collection":130,"cover_url":131,"cover_url_1_1":132,"title":133,"summary":134,"author":63},98,"2024-12-17","#AI Video Generator #Sora #Pika","article_res/cover/3b86e85d03fff4f356a3e4cf2bb329c9.jpeg","article_res/cover/5fa5c20ad0b40f8f544d257c0ef02938.jpeg","Pika 2.0 video generation officially released: effect comparison with Sora","今天，我们推出了Pika 2.0模型。卓越的文字对齐效果。惊人的视觉表现。还有✨场景成分✨",{"id":136,"publish_date":137,"is_original":32,"collection":138,"cover_url":139,"cover_url_1_1":140,"title":141,"summary":142,"author":63},71,"2025-01-14","#Nvidia #World Foundation Model #Cosmos #Physical AI #Embodied AI","article_res/cover/feddf8c832dfb45d28804291f6a42a9e.jpeg","article_res/cover/d6bc2f1186d96b78228c2283a17a3645.jpeg","NVIDIA's Cosmos World Model","Cosmos World Foundation Model Platform for Physical AI",[144,163,188],{"title":8,"items":145},[146,147,155],{"id":104,"publish_date":105,"is_original":32,"collection":106,"cover_url":107,"cover_url_1_1":108,"title":109,"summary":110,"author":63},{"id":148,"publish_date":149,"is_original":32,"collection":150,"cover_url":151,"cover_url_1_1":152,"title":153,"summary":154,"author":63},622,"2025-03-24","#OWL #AI Agent #MAS #MCP #CUA","article_res/cover/cb50ca7f2bf4d1ed50202d7406e1c19a.jpeg","article_res/cover/4aa7aa3badfacf3cc84121334f1050dd.jpeg","OWL: Multi-agent collaboration","OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation",{"id":156,"publish_date":157,"is_original":32,"collection":158,"cover_url":159,"cover_url_1_1":160,"title":161,"summary":162,"author":63},620,"2025-03-26","#LLM #Google #Gemini #AI Agent","article_res/cover/53751a6dbbe990b1eb0b63f3b062aed4.jpeg","article_res/cover/031344981f0a212ff82d1f3a64aa5756.jpeg","Gemini 2.5 Pro, claimed to be far ahead of the competition, has been released with great fanfare: comprehensively surpassing other LLMs and topping the global rankings","Gemini 2.5: Our most intelligent AI model",{"title":9,"items":164},[165,172,180],{"id":166,"publish_date":157,"is_original":32,"collection":167,"cover_url":168,"cover_url_1_1":169,"title":170,"summary":171,"author":63},619,"#OpenAI #AI Image Generator #4o #MMM #AR Transformer","article_res/cover/2faffc97fcecf3151552cb0fd3206d89.jpeg","article_res/cover/1133cb4948af44cee2e7fbe79efb69e5.jpeg","The native image function of GPT-4o is officially launched","Introducing 4o Image Generation",{"id":173,"publish_date":174,"is_original":4,"collection":175,"cover_url":176,"cover_url_1_1":177,"title":178,"summary":179,"author":63},434,"2023-07-15","#Anthropic #OpenAI #Google #AI Code Generator #Claude","article_res/cover/e1b6f600a2b9f262a4392684e5f2ce25.jpeg","article_res/cover/6e1772e83f78f9a351ab23d3e414adee.jpeg","Latest Updates on Google Bard /Anthropic Claude2 / ChatGPT Code Interpreter","We want our models to use their programming skills to provide more natural interfaces to the basic functions of our computers.  
\n - OpenAI",{"id":181,"publish_date":182,"is_original":4,"collection":183,"cover_url":184,"cover_url_1_1":185,"title":186,"summary":187,"author":63},417,"2023-08-24","#OpenAI","article_res/cover/bccf897d50a88b18364e35f7466387e0.jpeg","article_res/cover/2f871085c1073717c1703ae86e18056f.jpeg","The GPT-3.5 Turbo fine-tuning (fine-tuning function) has been released～","Developers can now bring their own data to customize GPT-3.5 Turbo for their use cases.",{"title":10,"items":189},[190,191,192],{"id":88,"publish_date":89,"is_original":32,"collection":90,"cover_url":91,"cover_url_1_1":92,"title":93,"summary":94,"author":63},{"id":96,"publish_date":97,"is_original":32,"collection":98,"cover_url":99,"cover_url_1_1":100,"title":101,"summary":102,"author":63},{"id":104,"publish_date":105,"is_original":32,"collection":106,"cover_url":107,"cover_url_1_1":108,"title":109,"summary":110,"author":63},true,{"code":4,"msg":5,"data":195},{"id":196,"publish_date":197,"is_original":32,"collection":198,"articles_id":199,"cover_url":200,"cover_url_1_1":201,"title":202,"summary":203,"author":63,"content":204},19,"2025-03-04","#GRPO #Deepseek #RL #PPO","-XHGkmjtbgLZbgZsluFJ3w","article_res/cover/21282be29e13c8b56c80b5a83646bb27.jpeg","article_res/cover/d78738a5736e0aa47fef68a3c1b4f08f.jpeg","GRPO (Group Relative Policy Optimization) Study Notes","We introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO)","\u003Cdiv class=\"rich_media_content js_underline_content\n                       autoTypeSetting24psection\n            \" id=\"js_content\">\u003Cp style='box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;text-indent: 0px;padding: 8px 0px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;orphans: 2;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;'>\u003Cspan leaf=\"\">I have previously taken a cursory look at the algorithm for aligning LLMs:\u003C/span>\u003C/p>\u003Cp style='box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;text-indent: 0px;padding: 8px 0px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;orphans: 2;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;'>\u003Cspan leaf=\"\">Today, as a non-tech person, I took a look at GRPO. 
GRPO is a reinforcement learning (RL) algorithm and a variant of Proximal Policy Optimization (PPO), designed to enhance the mathematical reasoning abilities of large language models (LLMs) while optimizing PPO's memory usage.

![GRPO overview](https://res.cooltool.vip/article_res/assets/17423769874850.7077611957516898.png)

**Core idea:**

The core idea of GRPO is to abandon the traditional critic model and instead use the average reward of multiple outputs generated for the same question as the baseline.
This \"group-relative\" approach aligns with the comparative nature of reward models, as reward models are typically trained to compare different outputs for the same problem.\u003C/span>\u003C/p>\u003Cp style='box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;text-indent: 0px;padding: 8px 0px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;orphans: 2;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;'>\u003Cstrong style=\"box-sizing: border-box;font-weight: bold;;color: rgb(0, 0, 0);background: none 0% 0% / auto no-repeat scroll padding-box border-box rgba(0, 0, 0, 0);width: auto;height: auto;margin: 0px;padding: 0px;border-style: none;border-width: 3px;border-color: rgba(0, 0, 0, 0.4);border-radius: 0px;\">\u003Cspan leaf=\"\">Difference from PPO:\u003C/span>\u003C/strong>\u003C/p>\u003Csection style=\"text-align: center;\" nodeleaf=\"\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100010685\" data-ratio=\"0.45987654320987653\" data-s=\"300,640\" data-type=\"png\" data-w=\"972\" type=\"block\" src=\"https://res.cooltool.vip/article_res/assets/17423769875370.09167306225867189.png\">\u003C/section>\u003Cul style='box-sizing: border-box;margin: 8px 0px;;list-style-type: disc;padding: 0px 0px 0px 25px;color: rgb(0, 0, 0);font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-size: 16px;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;letter-spacing: normal;orphans: 2;text-align: left;text-indent: 0px;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;' class=\"list-paddingleft-1\">\u003Cli style=\"box-sizing: border-box;;\">\u003Csection style=\"box-sizing: border-box;;margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);font-size: 16px;line-height: 1.8em;letter-spacing: 0em;text-align: left;font-weight: normal;\">\u003Cp style=\"box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: 0em;text-align: left;text-indent: 0em;padding: 8px 0px;\">\u003Cstrong style=\"box-sizing: border-box;font-weight: bold;;color: rgb(0, 0, 0);background: none 0% 0% / auto no-repeat scroll padding-box border-box rgba(0, 0, 0, 0);width: auto;height: auto;margin: 0px;padding: 0px;border-style: none;border-width: 3px;border-color: rgba(0, 0, 0, 0.4);border-radius: 0px;\">\u003Cspan leaf=\"\">Critic Model:\u003C/span>\u003C/strong>\u003Cspan leaf=\"\">PPO uses the critic model to estimate the state value function, while GRPO directly uses the average reward of multiple generated results under the same problem as a baseline, without the need to train an additional critic model, thereby reducing memory and computational burdens.\u003C/span>\u003C/p>\u003C/section>\u003C/li>\u003Cli style=\"box-sizing: border-box;;\">\u003Csection style=\"box-sizing: border-box;;margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);font-size: 16px;line-height: 1.8em;letter-spacing: 0em;text-align: left;font-weight: normal;\">\u003Cp 
style=\"box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: 0em;text-align: left;text-indent: 0em;padding: 8px 0px;\">\u003Cstrong style=\"box-sizing: border-box;font-weight: bold;;color: rgb(0, 0, 0);background: none 0% 0% / auto no-repeat scroll padding-box border-box rgba(0, 0, 0, 0);width: auto;height: auto;margin: 0px;padding: 0px;border-style: none;border-width: 3px;border-color: rgba(0, 0, 0, 0.4);border-radius: 0px;\">\u003Cspan leaf=\"\">KL Divergence Regularization:\u003C/span>\u003C/strong>\u003Cspan leaf=\"\">PPO usually adds a KL divergence penalty term to the reward function to prevent the policy model from deviating too much from the reference model. GRPO, on the other hand, directly adds KL divergence to the loss function for regularization, avoiding the complication of advantage function calculations.\u003C/span>\u003C/p>\u003C/section>\u003C/li>\u003C/ul>\u003Cp style='box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;text-indent: 0px;padding: 8px 0px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;orphans: 2;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;'>\u003Cstrong style=\"box-sizing: border-box;font-weight: bold;;color: rgb(0, 0, 0);background: none 0% 0% / auto no-repeat scroll padding-box border-box rgba(0, 0, 0, 0);width: auto;height: auto;margin: 0px;padding: 0px;border-style: none;border-width: 3px;border-color: rgba(0, 0, 0, 0.4);border-radius: 0px;\">\u003Cspan leaf=\"\">Formula Expression:\u003C/span>\u003C/strong>\u003C/p>\u003Csection style=\"text-align: center;\" nodeleaf=\"\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100010683\" data-ratio=\"0.09919261822376009\" data-s=\"300,640\" data-type=\"png\" data-w=\"867\" type=\"block\" src=\"https://res.cooltool.vip/article_res/assets/17423769874920.010885851679964809.png\">\u003C/section>\u003Csection>\u003Cspan style='color: rgb(0, 0, 0);font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-size: 16px;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;letter-spacing: normal;orphans: 2;text-align: left;text-indent: 0px;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;display: inline !important;float: none;'>\u003Cspan leaf=\"\">\u003Cspan textstyle=\"\" style=\"font-weight: bold;\">Among them:\u003C/span>\u003C/span>\u003C/span>\u003Cmjx-assistive-mml role=\"presentation\" unselectable=\"on\" display=\"inline\">\u003Cmo stretchy=\"false\">\u003Cspan leaf=\"\" data-mpa-action-id=\"m7sx83bzhjb\" style='color:rgba(0, 0, 0, 0.9);font-size:17px;font-family:\"mp-quote\", -apple-system-font, BlinkMacSystemFont, \"Helvetica Neue\", \"PingFang SC\", \"Hiragino Sans GB\", \"Microsoft YaHei UI\", \"Microsoft YaHei\", Arial, sans-serif;line-height:1.6;letter-spacing:0.034em;font-style:normal;font-weight:normal;'>\u003Cbr>\u003C/span>\u003C/mo>\u003C/mjx-assistive-mml>\u003C/section>\u003Csection style=\"text-align: center;\" 
nodeleaf=\"\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100010684\" data-ratio=\"0.6481481481481481\" data-s=\"300,640\" data-type=\"png\" data-w=\"1080\" type=\"block\" src=\"https://res.cooltool.vip/article_res/assets/17423769875020.4811177015296857.png\">\u003C/section>\u003Cp style='box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;text-indent: 0px;padding: 8px 0px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;orphans: 2;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;'>\u003Cstrong style=\"box-sizing: border-box;font-weight: bold;;color: rgb(0, 0, 0);background: none 0% 0% / auto no-repeat scroll padding-box border-box rgba(0, 0, 0, 0);width: auto;height: auto;margin: 0px;padding: 0px;border-style: none;border-width: 3px;border-color: rgba(0, 0, 0, 0.4);border-radius: 0px;\">\u003Cspan leaf=\"\">Advantage function calculation:\u003C/span>\u003C/strong>\u003C/p>\u003Cp style='box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;text-indent: 0px;padding: 8px 0px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;orphans: 2;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;' data-mpa-action-id=\"m7sxbsf51moz\">\u003Cspan leaf=\"\">GRPO uses group-wise relative rewards to calculate the advantage function, which aligns with the comparative nature of reward models. Specifically, for each question q, GRPO samples a set of outputs {o from the old policy model π\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxbbtq1ejz\">\u003Cspan leaf=\"\">θold\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">} . Then, the reward model is used to evaluate these outputs, resulting in a set of rewards r = {r\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxbeg3gid\">\u003Cspan leaf=\"\">1\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">, o\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxbg79au9\">\u003Cspan leaf=\"\">2\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">, ..., o\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxbizu28v\">\u003Cspan leaf=\"\">G\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">}. 
Subsequently, the advantage function is calculated based on these rewards within each group.\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxblny1692\">\u003Cspan leaf=\"\">1\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">, r\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxbnnx11cs\">\u003Cspan leaf=\"\">2\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">, ..., r\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxbse6229g\">\u003Cspan leaf=\"\">G\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">} Then, these rewards are normalized by subtracting the group mean and dividing by the group standard deviation.\u003C/span>\u003C/p>\u003Csection style=\"text-align: center;\" nodeleaf=\"\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100010686\" data-ratio=\"0.4606116774791474\" data-s=\"300,640\" data-type=\"png\" data-w=\"1079\" type=\"block\" src=\"https://res.cooltool.vip/article_res/assets/17423769889550.8895411466871319.png\">\u003C/section>\u003Cp style='box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;text-indent: 0px;padding: 8px 0px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;orphans: 2;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;'>\u003Cstrong style=\"box-sizing: border-box;font-weight: bold;;color: rgb(0, 0, 0);background: none 0% 0% / auto no-repeat scroll padding-box border-box rgba(0, 0, 0, 0);width: auto;height: auto;margin: 0px;padding: 0px;border-style: none;border-width: 3px;border-color: rgba(0, 0, 0, 0.4);border-radius: 0px;\">\u003Cspan leaf=\"\">Two types of supervision:\u003C/span>\u003C/strong>\u003C/p>\u003Cul style='box-sizing: border-box;margin: 8px 0px;;list-style-type: disc;padding: 0px 0px 0px 25px;color: rgb(0, 0, 0);font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-size: 16px;font-style: normal;font-variant-ligatures: normal;font-variant-caps: normal;font-weight: 400;letter-spacing: normal;orphans: 2;text-align: left;text-indent: 0px;text-transform: none;widows: 2;word-spacing: 0px;-webkit-text-stroke-width: 0px;white-space: normal;background-color: rgb(255, 255, 255);text-decoration-thickness: initial;text-decoration-style: initial;text-decoration-color: initial;' class=\"list-paddingleft-1\">\u003Cli style=\"box-sizing: border-box;;\">\u003Csection style=\"box-sizing: border-box;;margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);font-size: 16px;line-height: 1.8em;letter-spacing: 0em;text-align: left;font-weight: normal;\">\u003Cp style=\"box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: 0em;text-align: left;text-indent: 0em;padding: 8px 0px;\" data-mpa-action-id=\"m7sxd1yn1vov\">\u003Cstrong style=\"box-sizing: border-box;font-weight: bold;;color: rgb(0, 0, 0);background: none 0% 0% / auto no-repeat scroll padding-box border-box rgba(0, 0, 0, 0);width: auto;height: auto;margin: 0px;padding: 0px;border-style: none;border-width: 3px;border-color: rgba(0, 0, 0, 0.4);border-radius: 0px;\">\u003Cspan leaf=\"\">Outcome Supervision:\u003C/span>\u003C/strong>\u003Cspan leaf=\"\">Only provide normalized rewards at the end of each output o\u003C/span>\u003Csub leaf=\"\" 
mpa-font-style=\"m7sxcx8i1r9m\">\u003Cspan leaf=\"\">i\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">and assign the advantage function A for all tokens in the output.\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxd1xz24t8\">\u003Cspan leaf=\"\">i,t\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">Set as normalized rewards.\u003C/span>\u003C/p>\u003C/section>\u003C/li>\u003Cli style=\"box-sizing: border-box;;\">\u003Csection style=\"box-sizing: border-box;;margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);font-size: 16px;line-height: 1.8em;letter-spacing: 0em;text-align: left;font-weight: normal;\">\u003Cp style=\"box-sizing: border-box;margin: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: 0em;text-align: left;text-indent: 0em;padding: 8px 0px;\" data-mpa-action-id=\"m7sxd9th506\">\u003Cstrong style=\"box-sizing: border-box;font-weight: bold;;color: rgb(0, 0, 0);background: none 0% 0% / auto no-repeat scroll padding-box border-box rgba(0, 0, 0, 0);width: auto;height: auto;margin: 0px;padding: 0px;border-style: none;border-width: 3px;border-color: rgba(0, 0, 0, 0.4);border-radius: 0px;\">\u003Cspan leaf=\"\">Process Supervision:\u003C/span>\u003C/strong>\u003Cspan leaf=\"\">Provide rewards at the end of each reasoning step. Given a question q and G sampled outputs {o\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxd5unvx2\">\u003Cspan leaf=\"\">1\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">, o\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxd7st642\">\u003Cspan leaf=\"\">2\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">, ..., o\u003C/span>\u003Csub leaf=\"\" mpa-font-style=\"m7sxd9sp19i\">\u003Cspan leaf=\"\">G\u003C/span>\u003C/sub>\u003Cspan leaf=\"\">}, use the process reward model to score each step of the outputs, thereby generating corresponding rewards. Then, normalize these rewards using the mean and standard deviation. 
**Iterative reinforcement learning:**

As reinforcement learning training progresses, the old reward model may no longer be sufficient to supervise the current policy model. GRPO therefore also explores iterative reinforcement learning: in iterative GRPO, a new training set for the reward model is generated from the policy model's samples, and the reward model is continually trained using a replay mechanism that mixes in 10% historical data.
The reference model is then reset to the current policy model, and the policy model is continually trained with the new reward model.
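Structurally, the loop looks something like the sketch below (my own pseudostructure, not the paper's code; the three callables stand in for training routines the post does not spell out):

```python
import random

def iterative_grpo(policy, reward_model, questions, rounds,
                   grpo_step, build_rm_data, train_rm):
    """Hedged sketch of iterative GRPO as described above."""
    replay = []                    # historical reward-model training data
    reference = policy
    for _ in range(rounds):
        # RL phase: optimize the policy with group-relative advantages.
        policy = grpo_step(policy, reference, reward_model, questions)
        # Build fresh reward-model data from the current policy's samples.
        new_data = build_rm_data(policy, questions)
        # Replay mechanism: mix in roughly 10% historical data.
        k = min(len(replay), max(1, int(0.10 * len(new_data))))
        reward_model = train_rm(reward_model, new_data + random.sample(replay, k))
        replay.extend(new_data)
        reference = policy         # reset the reference to the current policy
    return policy
```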
**Summary:**

GRPO is an efficient and effective reinforcement learning algorithm that reduces memory and computational burdens by abandoning the critic model and using group-wise relative rewards. Experimental results show that GRPO can significantly improve the mathematical reasoning ability of LLMs, even when the SFT model has already reached a high level.

**Finally:**

I asked ChatGPT 4.5 to explain this in language a five-year-old can understand:

> Imagine you are playing a game and trying to improve a skill by trying different methods and observing which one works best. For example, if you want to build the tallest tower out of building blocks, you might try different ways of stacking them to see which lets the tower go higher.
>
> In computing, there is a method called "reinforcement learning," in which computers learn by trying different strategies to improve their ability to complete tasks.
> Traditionally, the computer uses a helper model called a critic to evaluate the quality of each action.

> However, a new approach has recently emerged called "Group Relative Policy Optimization" (GRPO). Unlike traditional methods, GRPO does not rely on the critic; instead, the computer generates a set of different actions in each state, much like trying several ways of stacking your blocks at the same time. The computer then evaluates the results of these actions to determine which method works best. By comparing these attempts, it can learn which strategies are more effective without expert guidance. (Renee's inner question: is this like the way a small-town test ace grows, by grinding through huge numbers of practice problems without a tutor?)
>
> This lets the computer learn more efficiently: it tries multiple options, observes which works best, and gradually improves at the task, just as you would when building a tower out of blocks.