Quentin Gallouédec, Edward Beeching, Clément Romac, Emmanuel Dellandréa
Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent
ArXiv, 2024-02-15. DOI: 10.48550/arXiv.2402.09844
Citations: 0
Abstract
The search for a general model that can operate seamlessly across multiple domains remains a key goal in machine learning research. The prevailing methodology in Reinforcement Learning (RL) typically limits models to a single task within a unimodal framework, a limitation that contrasts with the broader vision of a versatile, multi-domain model. In this paper, we present Jack of All Trades (JAT), a transformer-based model with a unique design optimized for handling sequential decision-making tasks and multimodal data types. The JAT model demonstrates its robust capabilities and versatility by achieving strong performance on very different RL benchmarks, along with promising results on Computer Vision (CV) and Natural Language Processing (NLP) tasks, all using a single set of weights. The JAT model marks a significant step towards more general, cross-domain AI model design, and notably, it is the first model of its kind to be fully open-sourced (see https://huggingface.co/jat-project/jat), including a pioneering general-purpose dataset.
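The abstract's central claim is that one transformer with a single set of weights can consume sequential decision-making data alongside text and vision inputs. The usual way to make that possible is to project every modality into a shared embedding space so the transformer sees one homogeneous token sequence. The following is a minimal numpy sketch of that idea only; it is not the authors' implementation, and all dimensions and projection names (`W_obs`, `W_act`, `E_tok`) are hypothetical.

```python
# Illustrative sketch (not the JAT codebase): project each modality --
# continuous observations, continuous actions, discrete text tokens --
# into one shared embedding space, so a single transformer can consume
# interleaved RL episodes and text with the same weights.
import numpy as np

rng = np.random.default_rng(0)
D = 16  # shared embedding width (hypothetical choice)

# One projection per modality into the shared space.
W_obs = rng.normal(size=(4, D))    # continuous observations of dim 4
W_act = rng.normal(size=(2, D))    # continuous actions of dim 2
E_tok = rng.normal(size=(100, D))  # embedding table for 100 text tokens

def embed_episode(observations, actions):
    """Interleave (obs_t, act_t) pairs into one token sequence."""
    seq = []
    for o, a in zip(observations, actions):
        seq.append(o @ W_obs)  # observation token
        seq.append(a @ W_act)  # action token
    return np.stack(seq)

obs = rng.normal(size=(3, 4))  # 3 timesteps of 4-dim observations
act = rng.normal(size=(3, 2))  # 3 timesteps of 2-dim actions
rl_seq = embed_episode(obs, act)

text_seq = E_tok[[5, 17, 42]]  # a 3-token text prompt in the same space

# Both sequences now live in R^D and can feed the same transformer.
print(rl_seq.shape, text_seq.shape)  # (6, 16) (3, 16)
```

Because both sequences end up as `(length, D)` arrays in the same space, downstream attention layers need no modality-specific branches, which is what makes a single-weight, cross-domain model like the one described above plausible.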