About: Deep neural network models of sound localization reveal how perception is adapted to real-world environments
An entity of type schema:ScholarlyArticle, within the data space covidontheweb.inria.fr, associated with source document(s).
Attributes and Values

type: Academic Article; research paper; schema:ScholarlyArticle
isDefinedBy: Covid-on-the-Web dataset
title: Deep neural network models of sound localization reveal how perception is adapted to real-world environments
Creator: Francl, Andrew; Mcdermott, Josh
source: BioRxiv
abstract: Mammals localize sounds using information from their two ears. Localization in real-world conditions is challenging, as echoes provide erroneous information, and noises mask parts of target sounds. To better understand real-world localization we equipped a deep neural network with human ears and trained it to localize sounds in a virtual environment. The resulting model localized accurately in realistic conditions with noise and reverberation, outperforming alternative systems that lacked human ears. In simulated experiments, the network exhibited many features of human spatial hearing: sensitivity to monaural spectral cues and interaural time and level differences, integration across frequency, and biases for sound onsets. But when trained in unnatural environments without either reverberation, noise, or natural sounds, these performance characteristics deviated from those of humans. The results show how biological hearing is adapted to the challenges of real-world environments and illustrate how artificial neural networks can extend traditional ideal observer models to real-world domains.
has issue date: 2020-07-22 (xsd:dateTime)
bibo:doi: 10.1101/2020.07.21.214486
has license: biorxiv
sha1sum (hex): 6034523a70336e6a7b26d801d39ec92256535910
schema:url: https://doi.org/10.1101/2020.07.21.214486
resource representing a document's title: Deep neural network models of sound localization reveal how perception is adapted to real-world environments
schema:publication: bioRxiv
resource representing a document's body: covid:6034523a70336e6a7b26d801d39ec92256535910#body_text
is schema:about of: named entity 'cues'; named entity 'time'; named entity 'trained'; named entity 'adapted'; named entity 'PARTS'; (more)
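
The attribute/value pairs above are RDF statements about this article served by the Covid-on-the-Web knowledge graph. As a minimal sketch of how such a record could be retrieved programmatically, the Python snippet below queries a SPARQL endpoint for the paper's title and creators by DOI. The endpoint URL (https://covidontheweb.inria.fr/sparql) and the property IRIs dct:title and dct:creator are assumptions based on the data-space name and common bibliographic vocabularies; only bibo:doi is confirmed by the listing above.

# Minimal sketch: look up this article's metadata via SPARQL.
# Assumptions (not confirmed by the page above): the endpoint lives at the
# usual Virtuoso path for this data space, the title/creator properties come
# from Dublin Core Terms, and the DOI is stored as a plain string literal.
import requests

ENDPOINT = "https://covidontheweb.inria.fr/sparql"  # assumed endpoint URL

QUERY = """
PREFIX dct:  <http://purl.org/dc/terms/>
PREFIX bibo: <http://purl.org/ontology/bibo/>

SELECT ?paper ?title ?creator WHERE {
  ?paper bibo:doi    "10.1101/2020.07.21.214486" ;
         dct:title   ?title ;
         dct:creator ?creator .
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Standard SPARQL JSON results: one binding per (paper, title, creator) row.
for row in response.json()["results"]["bindings"]:
    print(row["paper"]["value"])
    print("  title:  ", row["title"]["value"])
    print("  creator:", row["creator"]["value"])

Depending on how the dataset models authors, ?creator may bind to a literal name or to the IRI of a person resource; the query returns rows either way.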