[{"data":1,"prerenderedAt":206},["ShallowReactive",2],{"DlFXI4Eibt_Bn9lrEZz1TYbHCWFZj3IvqwHQSEW-Exc":3,"G_mvCA43eK-Gp09JcKUnZWQaixccd2rquAnntzD6PdY":194},{"code":4,"msg":5,"data":6},0,"",{"category":7,"tag":11,"hot":39,"new":78,"banner":118,"data":143,"cache":193},[8,9,10],"Agent","OpenAI","LLM",[12,14,17,20,23,25,27,30,33,36],{"title":8,"total":13},39,{"title":15,"total":16},"Google",44,{"title":18,"total":19},"Nvidia",13,{"title":21,"total":22},"Claude",11,{"title":9,"total":24},35,{"title":10,"total":26},85,{"title":28,"total":29},"DeepSeek",9,{"title":31,"total":32},"OCR",1,{"title":34,"total":35},"Chat",7,{"title":37,"total":38},"Generator",116,[40,48,55,64,71],{"id":41,"publish_date":42,"is_original":4,"collection":5,"cover_url":43,"cover_url_1_1":44,"title":45,"summary":46,"author":47},557,"2022-04-29","article_res/cover/7a9b1375ed9bb298154981bae42b794d.jpeg","article_res/cover/afa281dd52bc0454e6735daa8e6b0706.jpeg","Translation and summary of Messari Report [2.8 Kristin Smith, Blockchain Association and Katie Haun, a16z]","We need unity and speed right now.","Translation",{"id":49,"publish_date":50,"is_original":4,"collection":5,"cover_url":51,"cover_url_1_1":52,"title":53,"summary":54,"author":47},531,"2022-05-25","article_res/cover/e8362057f8fa189594c60afdfaaeb6e5.jpeg","article_res/cover/8ea08d0d6fa7eee6b57ed4ec61b61ad6.jpeg","Decentralized Society: Finding Web3’s Soul / Decentralized Society: Finding the Soul of Web3 -7","Decentralization through Pluralism When analyzing ecosystems, it's desirable to measure how decentralized it is.",{"id":56,"publish_date":57,"is_original":32,"collection":58,"cover_url":59,"cover_url_1_1":60,"title":61,"summary":62,"author":63},127,"2024-11-14","#Google #AI Game #World Model #AI Story","article_res/cover/0233a875b7ec2debf59779e311547569.jpeg","article_res/cover/6ffddb6ae4914b3c699493311aa9f198.jpeg","Google Launches \"Unbounded\": A Generative Infinite Character Life Simulation Game","Unbounded: A Generative Infinite 
Game of Character Life Simulation","Renee's Entrepreneurial Journey",{"id":13,"publish_date":65,"is_original":32,"collection":66,"cover_url":67,"cover_url_1_1":68,"title":69,"summary":70,"author":63},"2025-02-14","#Deep Dive into LLMs #Andrej Karpathy #LLM #Tool Use #Hallucination","article_res/cover/11e858ad6b74dfa80f923d549b62855c.jpeg","article_res/cover/615e1b320f1fc163edc1d2d154a6de33.jpeg","Andrej Karpathy's in-depth explanation of LLM (Part 4): Hallucinations","hallucinations, tool use, knowledge/working memory",{"id":72,"publish_date":73,"is_original":4,"collection":5,"cover_url":74,"cover_url_1_1":75,"title":76,"summary":77,"author":47},579,"2022-04-07","article_res/cover/39387376ba28447af1eb40576b9df215.jpeg","article_res/cover/02727ede8551ed49901d0abe6d6305b7.jpeg","Messari Report Translation and Summary 【1-7 Surviving the Winter】","I’d be more cautious here: 10 year and 10 hour thinking only.",[79,87,95,103,111],{"id":80,"publish_date":81,"is_original":32,"collection":82,"cover_url":83,"cover_url_1_1":84,"title":85,"summary":86,"author":63},627,"2025-03-20","#AI Avatar #AI Video Generation","article_res/cover/d95481358f73924989f8c4ee9c75d1c8.jpeg","article_res/cover/b74bc0fab01f8b6a6aa87696c0c3ed8b.jpeg","DisPose: Generating Animated Videos by Driving Video with Reference Images","DisPose is a controllable human image animation method that enhances video generation.",{"id":88,"publish_date":89,"is_original":32,"collection":90,"cover_url":91,"cover_url_1_1":92,"title":93,"summary":94,"author":63},626,"2025-03-21","#Deep Dive into LLMs #LLM #RL #Andrej Karpathy #AlphaGo","article_res/cover/446553a5c8f8f2f07d97b20eaee84e56.jpeg","article_res/cover/e6c2823409c9b34624064b9acbaca6f1.jpeg","AlphaGo and the Power of Reinforcement Learning - Andrej Karpathy's Deep Dive on LLMs (Part 9)","Simply learning from humans will never surpass human 
capabilities.",{"id":96,"publish_date":97,"is_original":32,"collection":98,"cover_url":99,"cover_url_1_1":100,"title":101,"summary":102,"author":63},625,"2025-03-22","#Deep Dive into LLMs #LLM #RL #RLHF #Andrej Karpathy","article_res/cover/8da81d38b1e5cf558a164710fd8a5389.jpeg","article_res/cover/96f028d76c362a99a0dd56389e8f7a9b.jpeg","Reinforcement Learning from Human Feedback (RLHF) - Andrej Karpathy's Deep Dive on LLMs (Part 10)","Fine-Tuning Language Models from Human Preferences",{"id":104,"publish_date":105,"is_original":32,"collection":106,"cover_url":107,"cover_url_1_1":108,"title":109,"summary":110,"author":63},624,"2025-03-23","#Deep Dive into LLMs #LLM #Andrej Karpathy #AI Agent #MMM","article_res/cover/a5e7c3d48bb09109684d6513287c661d.jpeg","article_res/cover/d3f22b7c0ab8d82fd2da457a299e0773.jpeg","The Future of Large Language Models - Andrej Karpathy's In-Depth Explanation of LLM (Part 11)","preview of things to come",{"id":112,"publish_date":105,"is_original":32,"collection":113,"cover_url":114,"cover_url_1_1":115,"title":116,"summary":117,"author":63},623,"#Google #Voe #AI Video Generation","article_res/cover/c44062fea0f336c2b96b3928292392c2.jpeg","article_res/cover/a041041c69092ad3db191c5bf3ff981b.jpeg","Trial of Google's video generation model VOE2","Our state-of-the-art video generation model",[119,127,135],{"id":120,"publish_date":121,"is_original":32,"collection":122,"cover_url":123,"cover_url_1_1":124,"title":125,"summary":126,"author":63},160,"2024-10-04","#Philosophy","article_res/cover/496990c49211e8b7f996b7d39c18168e.jpeg","article_res/cover/14dbaa1ade9cb4316d5829423a900362.jpeg","Time","The fungus of the morning does not know the waxing and waning of the moon, and the cicada does not know the seasons; this is a short life. To the south of the state of Chu there is a dark spirit which regards five hundred years as spring and five hundred years as autumn. 
In ancient times there was a great tree called the Ming which regarded eight thousand years as spring and eight thousand years as autumn; this is a long life.",{"id":128,"publish_date":129,"is_original":32,"collection":130,"cover_url":131,"cover_url_1_1":132,"title":133,"summary":134,"author":63},98,"2024-12-17","#AI Video Generator #Sora #Pika","article_res/cover/3b86e85d03fff4f356a3e4cf2bb329c9.jpeg","article_res/cover/5fa5c20ad0b40f8f544d257c0ef02938.jpeg","Pika 2.0 video generation officially released: effect comparison with Sora","今天，我们推出了Pika 2.0模型。卓越的文字对齐效果。惊人的视觉表现。还有✨场景成分✨",{"id":136,"publish_date":137,"is_original":32,"collection":138,"cover_url":139,"cover_url_1_1":140,"title":141,"summary":142,"author":63},71,"2025-01-14","#Nvidia #World Foundation Model #Cosmos #Physical AI #Embodied AI","article_res/cover/feddf8c832dfb45d28804291f6a42a9e.jpeg","article_res/cover/d6bc2f1186d96b78228c2283a17a3645.jpeg","NVIDIA's Cosmos World Model","Cosmos World Foundation Model Platform for Physical AI",[144,163,188],{"title":8,"items":145},[146,147,155],{"id":104,"publish_date":105,"is_original":32,"collection":106,"cover_url":107,"cover_url_1_1":108,"title":109,"summary":110,"author":63},{"id":148,"publish_date":149,"is_original":32,"collection":150,"cover_url":151,"cover_url_1_1":152,"title":153,"summary":154,"author":63},622,"2025-03-24","#OWL #AI Agent #MAS #MCP #CUA","article_res/cover/cb50ca7f2bf4d1ed50202d7406e1c19a.jpeg","article_res/cover/4aa7aa3badfacf3cc84121334f1050dd.jpeg","OWL: Multi-agent collaboration","OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation",{"id":156,"publish_date":157,"is_original":32,"collection":158,"cover_url":159,"cover_url_1_1":160,"title":161,"summary":162,"author":63},620,"2025-03-26","#LLM #Google #Gemini #AI Agent","article_res/cover/53751a6dbbe990b1eb0b63f3b062aed4.jpeg","article_res/cover/031344981f0a212ff82d1f3a64aa5756.jpeg","Gemini 2.5 Pro, claimed to be far ahead of the 
competition, has been released with great fanfare: comprehensively surpassing other LLMs and topping the global rankings","Gemini 2.5: Our most intelligent AI model",{"title":9,"items":164},[165,172,180],{"id":166,"publish_date":157,"is_original":32,"collection":167,"cover_url":168,"cover_url_1_1":169,"title":170,"summary":171,"author":63},619,"#OpenAI #AI Image Generator #4o #MMM #AR Transformer","article_res/cover/2faffc97fcecf3151552cb0fd3206d89.jpeg","article_res/cover/1133cb4948af44cee2e7fbe79efb69e5.jpeg","The native image function of GPT-4o is officially launched","Introducing 4o Image Generation",{"id":173,"publish_date":174,"is_original":4,"collection":175,"cover_url":176,"cover_url_1_1":177,"title":178,"summary":179,"author":63},434,"2023-07-15","#Anthropic #OpenAI #Google #AI Code Generator #Claude","article_res/cover/e1b6f600a2b9f262a4392684e5f2ce25.jpeg","article_res/cover/6e1772e83f78f9a351ab23d3e414adee.jpeg","Latest Updates on Google Bard /Anthropic Claude2 / ChatGPT Code Interpreter","We want our models to use their programming skills to provide more natural interfaces to the basic functions of our computers.  
\n - OpenAI",{"id":181,"publish_date":182,"is_original":4,"collection":183,"cover_url":184,"cover_url_1_1":185,"title":186,"summary":187,"author":63},417,"2023-08-24","#OpenAI","article_res/cover/bccf897d50a88b18364e35f7466387e0.jpeg","article_res/cover/2f871085c1073717c1703ae86e18056f.jpeg","The GPT-3.5 Turbo fine-tuning (fine-tuning function) has been released～","Developers can now bring their own data to customize GPT-3.5 Turbo for their use cases.",{"title":10,"items":189},[190,191,192],{"id":88,"publish_date":89,"is_original":32,"collection":90,"cover_url":91,"cover_url_1_1":92,"title":93,"summary":94,"author":63},{"id":96,"publish_date":97,"is_original":32,"collection":98,"cover_url":99,"cover_url_1_1":100,"title":101,"summary":102,"author":63},{"id":104,"publish_date":105,"is_original":32,"collection":106,"cover_url":107,"cover_url_1_1":108,"title":109,"summary":110,"author":63},true,{"code":4,"msg":5,"data":195},{"id":196,"publish_date":197,"is_original":4,"collection":198,"articles_id":199,"cover_url":200,"cover_url_1_1":201,"title":202,"summary":203,"author":204,"content":205},166,"2024-09-28","#History of Intelligence #Neuroscience","NjqTa89hkxh0a4N5RJGnZw","article_res/cover/f8369fb63aabb4d9022b46f4e4786223.jpeg","article_res/cover/a89cf18d49193ae495dffd23bd3623e7.jpeg","【A Brief History of Intelligence】3. 
Reinforcing (Vertebrates)","Curiosity and reinforcement learning coevolved because curiosity is a requirement for reinforcement learning to work.","Notes on \"A Brief History of Intelligence\"","\u003Cdiv class=\"rich_media_content js_underline_content\n                       autoTypeSetting24psection\n            \" id=\"js_content\">\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>540-485 million years ago, Earth entered the Cambrian period.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006792\" data-ratio=\"0.7494669509594882\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"938\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782180240.17570816829686953.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>At this time, vertebrates began to appear, and the brain structure of vertebrates had the same basic framework: forebrain, midbrain, and hindbrain. 
The forebrain further developed into the cortex/basal ganglia and thalamus/hypothalamus, beginning to show the prototype of subunits, hierarchy, and processing systems.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006793\" data-ratio=\"0.8009259259259259\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782180110.9787054612936379.jpeg\">\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006794\" data-ratio=\"0.5796296296296296\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782180100.27655454482853825.jpeg\">\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006795\" data-ratio=\"0.45092592592592595\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782180220.5333471544019643.jpeg\">\u003C/p>\u003Ch3 style='margin-top: 30px;margin-bottom: 15px;color: rgba(0, 0, 0, 0.85);;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;letter-spacing: normal;text-align: left;text-wrap: wrap;background-color: rgb(255, 255, 255);'>\u003Cspan style=\";font-size: 20px;color: rgb(0, 0, 0);line-height: 1.5em;letter-spacing: 0em;font-weight: bold;display: block;\">Reinforcement Learning and Curiosity\u003C/span>\u003C/h3>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Thorndike proved through his puzzle box experiment that cats can learn through trial and error, a learning method called reinforcement 
learning, and this ability only began to appear in vertebrates.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006796\" data-ratio=\"0.7235602094240837\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"955\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782180270.31373931974811353.jpeg\">\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006797\" data-ratio=\"0.31666666666666665\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782181770.7531759195981942.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Marvin Minsky designed an algorithm that mimics animal learning, called SNARC (Stochastic Neural Analog Reinforcement Calculator). It used 40 linked artificial neural networks, and every time the system successfully navigated out of a maze, it would reinforce the most recently activated synapses. However, the algorithm performed poorly because it was difficult to determine which step should be reinforced. 
Simply reinforcing the most recent action or all actions is not effective due to the lack of a reasonable mechanism for cross-time credit allocation.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006798\" data-ratio=\"0.875\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1056\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782183180.7737136813340284.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Richard Sutton proposed a new strategy to solve this problem: shifting from using actual rewards to expected rewards. This method learns through temporal differences (Temporal Difference, TD) in reward predictions at different times. Based on this principle, Tesauro developed a chess-playing system with significant success, validating the practicality of TD learning.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006799\" data-ratio=\"1.387037037037037\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782181770.3910502958178059.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>In 2018, Google DeepMind developed a new algorithm that successfully passed the first level of the game \"Montezuma's Revenge.\" This algorithm added \"curiosity\" to Sutton's TD learning, rewarding exploration of new behaviors. 
Similar to Skinner's box in operant conditioning, changing reward patterns have a greater effect on behavior reinforcement. Previously, our company invited Professor Wang Fei from Tsinghua University to lecture on psychology, where he mentioned that modern humans still retain parts of the \"primitive brain.\" Looking back now, many psychological phenomena can be traced back to the evolutionary history of the nervous system in ancient organisms. From the earlier radially symmetric animals' neurons, to the brain steering abilities of bilaterally symmetric animals, to the reinforcement learning capabilities of vertebrate brains, these have all evolved along such paths.\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Curiosity and reinforcement learning co-evolved because curiosity is a necessary condition for reinforcement learning. With the ability to recognize patterns, remember locations, and flexibly adjust behavior based on past rewards and punishments, the earliest vertebrates gained new opportunities: learning itself became an extremely valuable activity. The more patterns a vertebrate recognizes and the more locations it remembers, the greater its chances of survival. 
The more new things she tries, the more likely she is to discover accidental relationships between behavior and outcomes, thereby learning the correct response.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006814\" data-ratio=\"0.9444444444444444\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782190320.14547867407221937.jpeg\">\u003C/p>\u003Ch3 style='margin-top: 30px;margin-bottom: 15px;color: rgba(0, 0, 0, 0.85);;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;letter-spacing: normal;text-align: left;text-wrap: wrap;background-color: rgb(255, 255, 255);'>\u003Cspan style=\";font-size: 20px;color: rgb(0, 0, 0);line-height: 1.5em;letter-spacing: 0em;font-weight: bold;display: block;\">Dopamine\u003C/span>\u003C/h3>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Deep within the midbrain of vertebrates, there is a small cluster of dopamine neurons that send signals to multiple regions of the brain. Dopamine is associated with reinforcement and serves as the brain's pleasure signal. 
Dopamine activity increases when unexpected rewards appear, and decreases when expected rewards do not materialize.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006800\" data-ratio=\"0.9351851851851852\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782181770.027769668109767887.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Experiments found that cues predicting food arrival in 4 seconds trigger more dopamine release than those predicting food in 16 seconds, a phenomenon known as discounting. This principle was later incorporated into TD learning, driving AI systems to choose actions that obtain rewards faster.\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Additionally, signals indicating a 75% probability of food trigger more dopamine release than those indicating a 25% probability, a mechanism also introduced into TD learning.\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>It is important to note that dopamine is not a reward signal but a reinforcement signal. Reinforcement and reward must be decoupled for reinforcement learning to work effectively. 
To reasonably address the time credit allocation issue, the brain must reinforce behavior based on predicted future reward changes rather than actual rewards. This evolution began gradually with vertebrates.\u003C/p>\u003Ch3 style='margin-top: 30px;margin-bottom: 15px;color: rgba(0, 0, 0, 0.85);;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;letter-spacing: normal;text-align: left;text-wrap: wrap;background-color: rgb(255, 255, 255);'>\u003Cspan style=\";font-size: 20px;color: rgb(0, 0, 0);line-height: 1.5em;letter-spacing: 0em;font-weight: bold;display: block;\">Basal Ganglia and Hypothalamus\u003C/span>\u003C/h3>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>The mechanism of reinforcement learning originates from the ancient interaction between the basal ganglia and hypothalamus. The specific process is as follows:\u003C/p>\u003Col style='margin-top: 8px;margin-bottom: 8px;;padding-left: 25px;color: rgb(0, 0, 0);font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-size: 16px;letter-spacing: normal;text-align: left;text-wrap: wrap;background-color: rgb(255, 255, 255);' class=\"list-paddingleft-1\">\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">\u003Cp style=\";color: rgb(0, 0, 0);line-height: 1.8em;letter-spacing: 0em;text-indent: 0em;padding-top: 8px;padding-bottom: 8px;\">: Initially controlled by the hypothalamus. 
The hypothalamus retains ancestral dopamine-sensitive neurons responsible for categorizing external stimuli as good or bad and triggering corresponding reflexive reactions.\u003C/p>\u003C/section>\u003C/li>\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">\u003Cp style=\";color: rgb(0, 0, 0);line-height: 1.8em;letter-spacing: 0em;text-indent: 0em;padding-top: 8px;padding-bottom: 8px;\">: The hypothalamus only responds to actual rewards and does not become excited by predictive signals. Therefore, it can only react when real rewards arrive.\u003C/p>\u003C/section>\u003C/li>\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">\u003Cp style=\";color: rgb(0, 0, 0);line-height: 1.8em;letter-spacing: 0em;text-indent: 0em;padding-top: 8px;padding-bottom: 8px;\">: The hypothalamus's reward neurons control dopamine release by connecting with clusters of dopamine neurons in the basal ganglia. 
When the hypothalamus senses pleasure, it releases large amounts of dopamine to the basal ganglia; when it senses discomfort, it inhibits dopamine release.\u003C/p>\u003C/section>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006801\" data-ratio=\"0.7033492822966507\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1045\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782197480.914970883534463.jpeg\">\u003C/p>\u003C/li>\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">\u003Cp style=\";color: rgb(0, 0, 0);line-height: 1.8em;letter-spacing: 0em;text-indent: 0em;padding-top: 8px;padding-bottom: 8px;\">: The basal ganglia contains two parallel circuits:\u003C/p>\u003C/section>\u003C/li>\u003C/ol>\u003Cul style=\"margin-top: 8px;margin-bottom: 8px;;list-style-type: disc;padding-left: 25px;color: rgb(0, 0, 0);\" class=\"list-paddingleft-1\">\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">One circuit connects to the motor system, controlling body movements and reinforcing these actions by repeatedly triggering dopamine release.\u003C/section>\u003C/li>\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">Another circuit connects to dopamine neurons, focusing on predicting future rewards and actively triggering dopamine activation.\u003C/section>\u003C/li>\u003C/ul>\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">\u003Cp style=\";color: rgb(0, 0, 0);line-height: 1.8em;letter-spacing: 0em;text-indent: 0em;padding-top: 8px;padding-bottom: 8px;\">: Initially, the basal ganglia relied on feedback from the hypothalamus for learning. 
Over time, they gradually learned to self-judge, recognizing their own errors before hypothalamus feedback. This is why dopamine neurons initially respond to the first reward but over time shift their response to predictive reward cues.\u003C/p>\u003C/section>\u003C/li>\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">\u003Cp style=\";color: rgb(0, 0, 0);line-height: 1.8em;letter-spacing: 0em;text-indent: 0em;padding-top: 8px;padding-bottom: 8px;\">: The basal ganglia repeats behaviors that maximize dopamine release, consistent with Sutton's \"actor\" theory. This system aims to reinforce behaviors that lead to positive outcomes while inhibiting those that result in punishment.\u003C/p>\u003C/section>\u003C/li>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Through this mechanism, the basal ganglia and hypothalamus together construct the reinforcement learning system of vertebrates.\u003C/p>\u003Ch3 style='margin-top: 30px;margin-bottom: 15px;color: rgba(0, 0, 0, 0.85);;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;letter-spacing: normal;text-align: left;text-wrap: wrap;background-color: rgb(255, 255, 255);'>\u003Cspan style=\";font-size: 20px;color: rgb(0, 0, 0);line-height: 1.5em;letter-spacing: 0em;font-weight: bold;display: block;\">Pattern Recognition\u003C/span>\u003C/h3>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>From invertebrates evolving to vertebrates, animals 
began to possess brain structures capable of utilizing decoding neuron patterns to recognize objects, greatly expanding their perceptual range. In a universe with only fifty olfactory neurons, these neurons can identify different patterns. Just fifty cells can represent up to one hundred trillion patterns.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006802\" data-ratio=\"1.120866590649943\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"877\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782185090.6854237259613054.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Pattern recognition faces two main challenges:\u003C/p>\u003Col style='margin-top: 8px;margin-bottom: 8px;;padding-left: 25px;color: rgb(0, 0, 0);font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-size: 16px;letter-spacing: normal;text-align: left;text-wrap: wrap;background-color: rgb(255, 255, 255);' class=\"list-paddingleft-1\">\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">: How to distinguish overlapping patterns as different patterns.\u003C/section>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006803\" data-ratio=\"0.45883534136546184\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"996\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782184560.6018349249841528.jpeg\">\u003C/p>\u003C/li>\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">: How to generalize already 
recognized patterns to identify similar but not identical new patterns.\u003C/section>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006804\" data-ratio=\"0.49157581764122893\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1009\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782191080.4049739260096952.jpeg\">\u003C/p>\u003C/li>\u003C/ol>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>In the AI field, supervised learning and backpropagation algorithms are applied to image recognition, natural language processing, speech recognition, and autonomous driving cars, effectively addressing the above two challenges.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006805\" data-ratio=\"0.5332671300893744\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1007\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782199950.39147022759281636.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>However, the brain uses unsupervised learning and does not rely on backpropagation; it addresses pattern recognition challenges through other mechanisms.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006806\" data-ratio=\"1.2950191570881227\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"783\" style=\"\" 
src=\"https://res.cooltool.vip/article_res/assets/17423782187250.4605634316343401.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>For example, olfactory neurons send signals to pyramidal neurons in the cerebral cortex through wiring with two interesting characteristics:\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006807\" data-ratio=\"0.20555555555555555\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782197360.7749604728865642.jpeg\">\u003C/p>\u003Col style='margin-top: 8px;margin-bottom: 8px;;padding-left: 25px;color: rgb(0, 0, 0);font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-size: 16px;letter-spacing: normal;text-align: left;text-wrap: wrap;background-color: rgb(255, 255, 255);' class=\"list-paddingleft-1\">\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">Expansion: a few olfactory neurons connect to a much larger number of cortical neurons, greatly expanding the space for information processing.\u003C/section>\u003C/li>\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">Sparse connectivity: a given olfactory neuron connects only to a subset of cortical cells, not to all of them.\u003C/section>\u003C/li>\u003C/ol>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: 
rgb(255, 255, 255);'>These two seemingly simple wiring features may effectively solve the discrimination challenge—the cerebral cortex can tell apart patterns that are similar but different.\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>However, just as in vertebrate brains, when a neural network learns new knowledge it may forget old knowledge; that is, learning new patterns can interfere with previously learned ones. This is why some AI models must learn everything at once, after which learning stops (all parameters are locked).\u003C/p>\u003Ch3 style='margin-top: 30px;margin-bottom: 15px;color: rgba(0, 0, 0, 0.85);;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;letter-spacing: normal;text-align: left;text-wrap: wrap;background-color: rgb(255, 255, 255);'>\u003Cspan style=\";font-size: 20px;color: rgb(0, 0, 0);line-height: 1.5em;letter-spacing: 0em;font-weight: bold;display: block;\">CNN\u003C/span>\u003C/h3>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>A visual object activates different neuron patterns when it is rotated, moved closer or farther, or shifted in position, leading to the so-called \"invariance problem\": how to recognize the same object despite changes 
in input (such as the two images below). The brain somehow solves this problem.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006808\" data-ratio=\"0.3285123966942149\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"968\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782189920.7555367314788957.jpeg\">\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006809\" data-ratio=\"0.45524017467248906\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"916\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782187010.8002557725333332.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>David Hubel and Torsten Wiesel discovered the hierarchical mechanism of visual processing by showing cats different visual stimuli and recording their neuronal activity. The first cortical area to receive visual signals is V1 (the primary visual cortex). They found that neurons in V1 are highly sensitive to lines of specific orientations and positions: some neurons respond only to vertical lines, while others respond to horizontal or 45-degree diagonal lines. 
V1 acts as a map of the cat's entire visual field, with different neurons corresponding to lines at different positions and orientations.\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>The visual system starts from V1, breaking complex visual patterns down into simple lines and edges. The output of V1 is then passed to higher-level areas such as V2 and V4, and finally to IT (the inferotemporal cortex). As the processing level rises in this hierarchy, neurons become sensitive to increasingly complex features: V1 handles basic lines, V2 and V4 handle more complex shapes, and IT identifies whole objects such as faces. V1 responds only to inputs in specific regions of the visual field, while IT can recognize objects anywhere in the visual field. This process of integrating simple features into complex objects solves the visual invariance problem.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006810\" data-ratio=\"0.35185185185185186\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782195990.24440699630052287.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Hubel and Wiesel's two major discoveries:\u003C/p>\u003Col style='margin-top: 8px;margin-bottom: 8px;;padding-left: 25px;color: rgb(0, 0, 0);font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;font-size: 16px;letter-spacing: normal;text-align: left;text-wrap: 
wrap;background-color: rgb(255, 255, 255);' class=\"list-paddingleft-1\">\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">Visual processing is hierarchical, with low-level neurons identifying simple features and high-level neurons identifying complex objects.\u003C/section>\u003C/li>\u003Cli style=\";\">\u003Csection style=\";margin-top: 5px;margin-bottom: 5px;color: rgb(1, 1, 1);line-height: 1.8em;letter-spacing: 0em;\">Neurons at the same level are sensitive to the same features but responsible for different input positions.\u003C/section>\u003C/li>\u003C/ol>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Inspired by these findings, Kunihiko Fukushima proposed the neocognitron, the forerunner of convolutional neural networks (CNNs). Like V1, a CNN first breaks the input image down into feature maps, each showing where a specific feature (such as a vertical or horizontal line) appears in the input image. This process is called convolution.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006811\" data-ratio=\"0.33425925925925926\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"1080\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782181690.10898892921341896.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>Fukushima's innovation lay in introducing \"inductive bias,\" assumptions built into the system during design. 
CNNs assume that the same feature in different positions should be treated identically, which solves the translation invariance problem. By encoding this rule directly into the network architecture, CNNs can learn and process visual information efficiently, without needing vast amounts of data and training time to learn the rule from scratch.\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>To study their cognitive abilities, comparative psychologist Caroline DeLong trained goldfish to tap on pictures to obtain food. She showed the goldfish two pictures, and whenever they tapped the frog picture, they were rewarded with food. Soon, the goldfish learned to swim toward the frog picture whenever they saw it. Next, DeLong changed the experiment, showing a picture of the same frog from an angle the goldfish had never seen. Surprisingly, the goldfish swam toward the new frog picture, immediately recognizing the frog. This shows that in some respects a fish's brain surpasses even our most advanced computer vision systems. 
CNNs need large amounts of data to learn how objects look under rotation and other 3D changes, but fish seem to recognize new viewpoints instantly.\u003C/p>\u003Ch3 style='margin-top: 30px;margin-bottom: 15px;color: rgba(0, 0, 0, 0.85);;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;letter-spacing: normal;text-align: left;text-wrap: wrap;background-color: rgb(255, 255, 255);'>\u003Cspan style=\";font-size: 20px;color: rgb(0, 0, 0);line-height: 1.5em;letter-spacing: 0em;font-weight: bold;display: block;\">World Models\u003C/span>\u003C/h3>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>The semicircular canals first evolved in early vertebrates, appearing at roughly the same time as reinforcement learning and the ability to build spatial maps. Vestibular sensation is crucial for building spatial maps. In the vertebrate hindbrain, whether in fish, mice, or other species, there are \"head direction neurons\" that fire only when the animal faces a specific direction. 
These neurons integrate visual and vestibular inputs to form a neural compass, allowing vertebrate brains to simulate and navigate three-dimensional space.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006813\" data-ratio=\"1.1245972073039743\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"931\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782181800.09676021140559188.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>The medial cortex is a part of the cerebral cortex that later evolved into the hippocampus in mammals. If you record the neural activity of a fish's medial cortex while it swims around, you will find that some neurons activate only when the fish is in a specific spatial position; others activate when the fish approaches the edge of the tank or faces a certain direction. 
Visual, vestibular, and head direction signals converge in the medial cortex, mixing here and transforming into a spatial map.\u003C/p>\u003Cp style=\"text-align: center;\">\u003Cimg class=\"rich_pages wxw-img js_insertlocalimg\" data-imgfileid=\"100006812\" data-ratio=\"0.9812734082397003\" data-s=\"300,640\" data-type=\"jpeg\" data-w=\"801\" style=\"\" src=\"https://res.cooltool.vip/article_res/assets/17423782181810.09450237682887819.jpeg\">\u003C/p>\u003Cp style='margin-bottom: 0px;;color: rgb(0, 0, 0);font-size: 16px;line-height: 1.8em;letter-spacing: normal;text-align: left;padding-top: 8px;padding-bottom: 8px;font-family: Optima, \"Microsoft YaHei\", PingFangSC-regular, serif;text-wrap: wrap;background-color: rgb(255, 255, 255);'>The most important breakthrough in this process is the brain's construction of an internal model—a representation of the external world. Initially, this model might have only helped the brain identify arbitrary positions in space and calculate the correct direction from any starting point to the target. But the construction of this internal model laid the foundation for the brain's future evolution. It evolved from an initial tool for remembering spatial positions into more complex functions.\u003C/p>\u003C/div>",1752585449334]
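The pattern-capacity claim in the opening section (fifty olfactory neurons representing over one hundred trillion patterns) follows from treating each cell as binary. A quick check (Python is our choice here; the article itself contains no code):

```python
# If each of 50 olfactory neurons is treated as simply "firing" or
# "silent", the number of distinct joint activity patterns is 2**50.
n_neurons = 50
n_patterns = 2 ** n_neurons
print(n_patterns)                 # 1125899906842624
print(n_patterns > 100 * 10**12)  # True: more than one hundred trillion
```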
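The convolution step described in the CNN section, and the way reusing the same kernel weights at every position yields translation invariance, can be sketched in a few lines. This is an illustrative toy (NumPy, naive loops), not the article's own code; the 2x2 kernel below is a hypothetical vertical-edge detector, loosely analogous to an orientation-selective V1 neuron.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    # Naive "valid" 2D convolution (strictly, cross-correlation, as in CNNs):
    # slide the kernel over the image and take a dot product at each position.
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical vertical-edge detector: responds where left and right
# columns of its window differ.
kernel = np.array([[1., -1.],
                   [1., -1.]])

# An image containing a vertical line, and the same image shifted one
# pixel to the right.
img = np.zeros((6, 6))
img[:, 2] = 1.0
shifted = np.roll(img, 1, axis=1)

fmap = convolve2d_valid(img, kernel)
fmap_shifted = convolve2d_valid(shifted, kernel)

# Because the same kernel weights are reused at every position, shifting
# the input simply shifts the feature map (translation equivariance).
print(np.allclose(fmap[:, :-1], fmap_shifted[:, 1:]))  # True
```

The design point is the one the article makes: the rule "the same feature looks the same wherever it appears" is baked into the architecture by weight sharing, so the network never has to learn it from data.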