OP: mingdashike22

[Electrical Engineering and Systems Science] Semantic speech retrieval with a visually grounded model of untranscribed speech


Posted by mingdashike22 on 2022-3-6 18:16:00 (from mobile)

Title:
Semantic speech retrieval with a visually grounded model of untranscribed speech
---
Authors:
Herman Kamper, Gregory Shakhnarovich, Karen Livescu
---
Latest submission year:
2018
---
Categories:

Primary category: Computer Science
Secondary category: Computation and Language
Description: Covers natural language processing. Roughly includes material in ACM Subject Class I.2.7. Note that work on artificial languages (programming languages, logics, formal systems) that does not explicitly address natural-language issues broadly construed (natural-language processing, computational linguistics, speech, text retrieval, etc.) is not appropriate for this area.
--
Primary category: Computer Science
Secondary category: Computer Vision and Pattern Recognition
Description: Covers image processing, computer vision, pattern recognition, and scene understanding. Roughly includes material in ACM Subject Classes I.2.10, I.4, and I.5.
--
Primary category: Electrical Engineering and Systems Science
Secondary category: Audio and Speech Processing
Description: Theory and methods for processing signals representing audio, speech, and language, and their applications. This includes analysis, synthesis, enhancement, transformation, classification and interpretation of such signals as well as the design, development, and evaluation of associated signal processing systems. Machine learning and pattern analysis applied to any of the above areas is also welcome. Specific topics of interest include: auditory modeling and hearing aids; acoustic beamforming and source localization; classification of acoustic scenes; speaker separation; active noise control and echo cancellation; enhancement; de-reverberation; bioacoustics; music signals analysis, synthesis and modification; music information retrieval; audio for multimedia and joint audio-video processing; spoken and written language modeling, segmentation, tagging, parsing, understanding, and translation; text mining; speech production, perception, and psychoacoustics; speech analysis, synthesis, and perceptual modeling and coding; robust speech recognition; speaker recognition and characterization; deep learning, online learning, and graphical models applied to speech, audio, and language signals; and implementation aspects ranging from system architecture to fast algorithms.
--

---
Abstract:
  There is growing interest in models that can learn from unlabelled speech paired with visual context. This setting is relevant for low-resource speech processing, robotics, and human language acquisition research. Here we study how a visually grounded speech model, trained on images of scenes paired with spoken captions, captures aspects of semantics. We use an external image tagger to generate soft text labels from images, which serve as targets for a neural model that maps untranscribed speech to (semantic) keyword labels. We introduce a newly collected data set of human semantic relevance judgements and an associated task, semantic speech retrieval, where the goal is to search for spoken utterances that are semantically relevant to a given text query. Without seeing any text, the model trained on parallel speech and images achieves a precision of almost 60% on its top ten semantic retrievals. Compared to a supervised model trained on transcriptions, our model matches human judgements better by some measures, especially in retrieving non-verbatim semantic matches. We perform an extensive analysis of the model and its resulting representations.
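To make the training signal concrete: the speech network in this setup never sees text, only the external image tagger's soft keyword probabilities, which it fits with a multi-label cross-entropy. The sketch below is a toy illustration of that objective for a single utterance, not the authors' code; the keyword probabilities and the name `soft_label_loss` are invented here.

```python
import math

def soft_label_loss(predictions, soft_targets):
    """Summed binary cross-entropy between the speech model's keyword
    probabilities and the image tagger's soft labels (the supervision
    described in the abstract; all numbers below are made up)."""
    loss = 0.0
    for p, t in zip(predictions, soft_targets):
        loss += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return loss

# One utterance, three hypothetical keywords (e.g. "dog", "grass", "ball").
tagger_soft_labels = [0.9, 0.6, 0.1]   # targets from the external image tagger
speech_model_probs = [0.8, 0.5, 0.2]   # outputs of the speech network
print(round(soft_label_loss(speech_model_probs, tagger_soft_labels), 3))
```

Because the targets are soft rather than one-hot, the loss never reaches zero; training pushes the speech network's keyword distribution toward the tagger's visual predictions.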
---
PDF link:
https://arxiv.org/pdf/1710.01949
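The semantic speech retrieval task itself reduces to ranking utterances by the model's score for the query keyword and checking the top results against human relevance judgements. A minimal sketch, with made-up scores and judgements (on real data the paper reports roughly 60% precision among the top ten retrievals):

```python
def retrieve(scores, query_index, top_n=10):
    """Return utterance indices ranked by model score for one keyword query.
    scores[u][k] is the model's probability that keyword k applies to
    utterance u (toy numbers below, not the paper's outputs)."""
    ranked = sorted(range(len(scores)),
                    key=lambda u: scores[u][query_index], reverse=True)
    return ranked[:top_n]

def precision_at_n(ranked, relevant):
    """Fraction of the retrieved utterances judged semantically relevant."""
    hits = sum(1 for u in ranked if u in relevant)
    return hits / len(ranked)

# Toy example: six utterances, one keyword query, invented judgements.
scores = [[0.9], [0.2], [0.8], [0.1], [0.7], [0.3]]
relevant = {0, 2, 5}                        # human relevance judgements
top = retrieve(scores, query_index=0, top_n=3)
print(top, precision_at_n(top, relevant))
```

Note that relevance here is semantic, not verbatim: an utterance can be judged relevant to a query word it never contains, which is exactly where the visually grounded model compares favourably to a transcription-supervised one.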
Keywords: visualization, architecture, localization, presentation, segmentation, retrieval, speech, text, transcription

