1. IDC releases market analysis report on Chinese robots going overseas
On November 1, International Data Corporation (IDC), a leading global IT market research and advisory firm, released the report "China Robotics Going-Overseas Market Analysis, 2024: Setting Sail, Braving the Waves" (IDC #CHC52664424, October 2024). The report focuses on the overseas progress of China's industrial robots and commercial service robots, examining vendors' competitive advantages going abroad, regional distribution, typical go-to-market models, and leading players, and it analyzes the challenges ahead and offers recommendations.
▍China's industrial robot overseas market
Regional distribution
In 2023, Chinese industrial robot vendors earned a combined overseas revenue of roughly RMB 9.58 billion, concentrated in Asia-Pacific, Europe, and North America; together these regions contributed about 90% of Chinese industrial robot vendors' overseas revenue.
▍2023 overseas revenue of Chinese collaborative robot vendors
Collaborative robots are an emerging hot segment for Chinese robot vendors going abroad. In 2023, Chinese collaborative robot vendors' overseas revenue totaled more than RMB 380 million. Vendors such as Dobot, Han's Robot, AUBO, and JAKA are actively expanding their overseas business. Dobot entered overseas markets early, and its overseas revenue now accounts for more than half of its total, ahead of other Chinese collaborative robot vendors.
▍China's commercial service robot overseas market
Regional distribution
In 2023, Chinese commercial service robot vendors earned a combined overseas revenue of roughly RMB 1.51 billion, concentrated in Asia-Pacific and Europe; together these regions contributed more than 90% of the vendors' overseas revenue.
Asia-Pacific accounted for about 70% of overseas revenue, with Japan and South Korea contributing roughly 62.5% of that. Driven by aging populations and high labor costs, and being geographically and culturally close to China, Japan and South Korea show higher market acceptance and are the key destinations for Chinese commercial service robots going abroad.
▍2023 overseas revenue of Chinese commercial service robot vendors
According to the IDC report, Chinese commercial service robot vendors have been the vanguard of the robotics going-overseas push, expanding aggressively in recent years and growing rapidly. For leading players such as Keenon Robotics and Gaussian Robotics, overseas revenue already matches or exceeds their China revenue, making overseas business a key growth driver.
Keenon Robotics has grown quickly in recent years through active localization. In 2023 it posted growth of 240%, 100%, and 50% in Japan, South Korea, and Europe respectively, and its food-delivery robots ranked first among Chinese vendors with a 44.8% share of overseas revenue.
Gaussian Robotics focuses on commercial cleaning robots and has kept increasing its overseas investment in recent years, sustaining market growth through systematic operations.
Pudu Robotics went abroad early and has built a diversified product lineup around food-delivery and commercial cleaning robots; overseas markets have become a core focus of its business.
Three advantages of Chinese robots going overseas
Typical go-overseas models of Chinese robot vendors
IDC's recommendations for Chinese robot vendors
Article link:
https://www.idc.com/getdoc.jsp?containerId=prCHC52694824
Source: IDC Consulting
2. Which embodied-AI research directions are winning over reviewers? A look at CoRL 2024 acceptances
CoRL 2024, the Conference on Robot Learning, is an annual international conference focused on the intersection of robotics and machine learning. Its accepted work spans many areas, including tactile sensing for robot hands, mobile manipulation, locomotion control and learning, navigation, human-robot interaction, and teleoperation.
In hand tactile sensing, accepted work includes learning-enabled dexterous manipulation; teleoperation of anthropomorphic robot hands; gravity-invariant in-hand object rotation combined with sim-to-real touch transfer; learning from humans on an open-source dexterous robot hand; soft vision-based fingertip tactile sensing; a versatile teleoperation system for robot manipulation; object perception through in-hand acoustic vibration; a hybrid soft-rigid robot platform that learns generalizable skills from demonstrations; and fine manipulation learned via visuotactile sensing. In mobile manipulation: long-horizon mobile manipulation in dynamic environments; zero-shot, deploy-anywhere manipulation policies; an open-source omnidirectional mobile manipulator for robot learning; and real-time, robust 3D mapping, navigation, and semantic segmentation for autonomous mobile robots on edge devices. In locomotion control and learning: real-time legged locomotion control via diffusion over offline datasets; humanoid running and jumping; zero-shot safety for quadrupeds; and agile jumping on challenging discontinuous terrain. In human-robot interaction: a versatile bionic humanoid head for immersive interaction. In teleoperation: reach control via immersive augmented reality; a kinematic retargeting algorithm for expressive whole-arm teleoperation; and teleoperation with immersive active visual feedback. Below we have organized a selection of CoRL 2024 accepted papers by research direction. Let's take a look!
Humanoid robots
OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation, https://arxiv.org/abs/2410.11792
Humanoid Parkour Learning, https://arxiv.org/abs/2406.10759
Adapting Humanoid Locomotion over Challenging Terrain via Two-Phase Training, https://openreview.net/attachment?id=O0oK2bVist&name=pdf
Robot learning and planning
Theia: Distilling Diverse Vision Foundation Models for Robot Learning, https://arxiv.org/pdf/2407.20179
Body Transformer: Leveraging Robot Embodiment for Policy Learning, https://openreview.net/pdf?id=Oce2215aJE
Gameplay Filters: Robust Zero-Shot Safety through Adversarial Imagination, https://openreview.net/pdf?id=Ke5xrnBFAR
Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models, https://openreview.net/pdf?id=evCXwlCMIi
Towards Open-World Grasping with Large Vision-Language Models, https://openreview.net/pdf?id=QUzwHYJ9Hf
Safe Bayesian Optimization for the Control of High-Dimensional Embodied Systems, https://openreview.net/pdf?id=8PcRynpd1m
LeLaN: Learning A Language-Conditioned Navigation Policy from In-the-Wild Videos, https://openreview.net/pdf?id=zIWu9Kmlqk
Trajectory Improvement and Reward Learning from Comparative Language Feedback, https://openreview.net/pdf?id=1tCteNSbFH
Policy Adaptation via Language Optimization: Decomposing Tasks for Few-Shot Imitation, https://openreview.net/forum?id=qUSa3F79am
Learning Transparent Reward Models via Unsupervised Feature Selection, https://openreview.net/pdf?id=2sg4PY1W9d
MaIL: Improving Imitation Learning with Selective State Space Models, https://openreview.net/pdf?id=IssXUYvVTg
Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight, https://openreview.net/forum?id=bt0PX0e4rE
Autonomous Improvement of Instruction Following Skills via Foundation Models, https://openreview.net/attachment?id=8Ar8b00GJC&name=pdf
Robotic Control via Embodied Chain-of-Thought Reasoning, https://openreview.net/pdf?id=S70MgnIA0v
Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation, https://openreview.net/attachment?id=AuJnXGq3AL&name=pdf
Robotic arms
DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands, https://arxiv.org/abs/2310.08809
General Flow as Foundation Affordance for Scalable Robot Learning, Chengbo Yuan, Chuan Wen, Tong Zhang, Yang Gao†, https://general-flow.github.io/, CoRL 2024.
Leveraging Locality to Boost Sample Efficiency in Robotic Manipulation, Tong Zhang, Yingdong Hu, Jiacheng You, Yang Gao†, https://sgrv2-robot.github.io/, CoRL 2024.
HiRT: Enhancing Robotic Control with Hierarchical Robot Transformers, Jianke Zhang∗, Yanjiang Guo∗, Xiaoyu Chen, Yen-Jen Wang, Yucheng Hu, Chengming Shi, Jianyu Chen†, https://arxiv.org/abs/2410.05273, CoRL 2024.
Learning to Manipulate Anywhere: A Visual Generalizable Framework For Reinforcement Learning, Zhecheng Yuan*, Tianming Wei*, Shuiqi Cheng, Gu Zhang, Yuanpei Chen, Huazhe Xu†, https://gemcollector.github.io/maniwhere/, CoRL 2024.
RiEMann: Near Real-Time SE(3)-Equivariant Robot Manipulation without Point Cloud Segmentation, Chongkai Gao, Zhengrong Xue, Shuying Deng, Tianhai Liang, Siqi Yang, Lin Shao, Huazhe Xu†, https://riemann-web.github.io/, CoRL 2024.
ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter, https://arxiv.org/abs/2407.11298
ALOHA Unleashed: A Simple Recipe for Robot Dexterity, https://aloha-unleashed.github.io/assets/aloha_unleashed.pdf. Bimanual manipulation.
Mobile ALOHA: Learning Bimanual Mobile Manipulation using Low-Cost Whole-Body Teleoperation, https://openreview.net/forum?id=FO6tePGRZj. Bimanual manipulation.
RP1M: A Large-Scale Motion Dataset for Piano Playing with Bi-Manual Dexterous Robot Hands, https://openreview.net/attachment?id=4Of4UWyBXE&name=pdf. Bimanual manipulation.
DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-scale Synthetic Cluttered Scenes, https://openreview.net/attachment?id=5W0iZR9J7h&name=pdf
Navigation
Uncertainty-Aware Decision Transformer for Stochastic Driving Environments, https://arxiv.org/abs/2309.16397
InstructNav: Zero-shot System for Generic Instruction Navigation in Unexplored Environment, https://arxiv.org/pdf/2406.04882
Context-Aware Replanning with Pre-explored Semantic Map for Object Navigation, https://openreview.net/attachment?id=Dftu4r5jHe&name=pdf
Lifelong Autonomous Fine-Tuning of Navigation Foundation Models in the Wild, https://openreview.net/attachment?id=vBj5oC60Lk&name=pdf
Embodied perception
VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding, https://arxiv.org/pdf/2410.13860. 3D scene understanding; 3D visual grounding.
GraspSplats: Efficient Manipulation with 3D Feature Splatting, https://arxiv.org/html/2409.02084
Transferable Tactile Transformers for Representation Learning Across Diverse Sensors and Tasks, https://arxiv.org/abs/2406.13640
D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation, https://openreview.net/attachment?id=7E3JAys1xO&name=pdf
LiDARGrid: Self-supervised 3D Opacity Grid from LiDAR for Scene Forecasting, https://openreview.net/attachment?id=MfuzopqVOX&name=pdf
Autonomous driving motion planning
DriveVLM: The Convergence of Autonomous Driving and Large Vision-Language Models, https://arxiv.org/abs/2402.12289. Used for scene description, scene analysis, and hierarchical planning.
Uncertainty-Aware Decision Transformer for Stochastic Driving Environments, https://arxiv.org/abs/2309.16397. Proposes UNREST, a planning method for stochastic driving environments.
Hint-AD: Holistically Aligned Interpretability in End-to-End Autonomous Driving , https://arxiv.org/pdf/2409.06702
Robot manipulation
OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation, https://arxiv.org/abs/2410.11792
Reinforcement Learning with Foundation Priors: Let the Embodied Agent Efficiently Learn on Its Own, https://arxiv.org/abs/2310.02635
General Flow as Foundation Affordance for Scalable Robot Learning, https://arxiv.org/abs/2401.11439
A Universal Semantic-Geometric Representation for Robotic Manipulation, https://arxiv.org/abs/2306.10474
Learning to Manipulate Anywhere: A Visual Generalizable Framework For Reinforcement Learning, https://arxiv.org/abs/2407.15815
GenSim2: Scaling Robot Data Generation with Multi-modal and Reasoning LLMs, https://arxiv.org/abs/2410.03645
RiEMann: Near Real-Time SE(3)-Equivariant Robot Manipulation without Point Cloud Segmentation, https://arxiv.org/abs/2403.19460
RoboGolf: Mastering Real-World Minigolf with a Reflective Multi-Modality Vision-Language Model, https://arxiv.org/abs/2406.10157
Continuously Improving Mobile Manipulation with Autonomous Real-World RL, https://continual-mobile-manip.github.io/resources/paper.pdf
Implicit Grasp Diffusion: Bridging the Gap between Dense Prediction and Sampling-based Grasping, https://openreview.net/pdf?id=VUhlMfEekm
An Open-Source Soft Robotic Platform for Autonomous Aerial Manipulation in the Wild, https://openreview.net/pdf?id=SfaB20rjVo
Source: 具身智能之心
3. Embodied-AI robotics company Galaxea AI raises over RMB 200 million in Pre-A round
News on November 4: embodied-AI robotics company Galaxea AI (星海图) has closed a Pre-A round of more than RMB 200 million, co-led by GL Ventures and Ant Group, with participation from miHoYo, Wuxi Venture Capital Group, Tongge Ventures, FunPlus, and existing shareholders; China Renaissance served as financial advisor. With this round completed, Galaxea will continue to advance the R&D and deployment of its embodied robot hardware and core modules, end-to-end AI algorithms, and scenario-specific solutions.
Founded in September 2023, Galaxea AI counts among its team two world-class embodied-AI algorithm scientists who are assistant professors at Tsinghua's Institute for Interdisciplinary Information Sciences, giving the company frontier-level understanding and innovation capability in embodied perception, mobility, and manipulation. Its core team also brings together practitioners with deep experience in shipping innovative products and mass-producing autonomous-driving systems, positioning the company to tackle the high technical difficulty, complexity, and cross-disciplinary breadth of embodied intelligence.
To date, Galaxea AI has built an Embodied Foundation Model (EFM) and a spatial-intelligence engine based on the Real2Sim2Real (RSR) paradigm.
Source: 投资界 (PEdaily)