
Details

Author(s) / Contributors
Title
Applying Point-Based Principal Component Analysis on Environment, Ships and Cetaceans Whistles Signal Classification
Is part of
  • 2007 Symposium on Underwater Technology and Workshop on Scientific Use of Submarine Cables and Related Technologies, 2007, p.94-100
Place / Publisher
IEEE
Year of publication
2007
Source
IEEE Electronic Library Online
Descriptions/Notes
  • For many undersea research scenarios, instruments need to be deployed for more than one month, which is the basic time interval for many phenomena. With limited power supply and memory, management strategies are crucial for the success of data collection. For acoustic recording of undersea activities, either a preprogrammed duty cycle is configured to log a partial time series, or the spectrogram of the signal is derived and stored, in order to use the available memory efficiently. To overcome this limitation, we propose an algorithm that classifies different sound patterns and stores only the sound data of interest. Conventionally, pattern recognition for acoustic signals is done at the spectral level: features such as characteristic frequencies, large amplitude at selected frequencies, or an intensity threshold are used to identify or classify different patterns. One main limitation of this type of approach is that the algorithm is generally range-dependent and, as a result, also sound-level-dependent, so it is less robust to changes in the environment. On the other hand, an interesting observation is that when human beings look at a spectrogram, they can immediately tell the difference between two patterns. Even with no knowledge about the nature of the source, they can still discern tiny dissimilarities and group the patterns accordingly. This suggests that recognition and classification can be done on the spectrogram as an image processing and recognition problem. In this work, we propose to modify PCA (Principal Component Analysis), a popular technique used in face recognition, to classify sounds of interest in the ocean. Among the many different sound sources in the ocean, we focus on three categories of interest: rain, ships, and whales and dolphins. The sound data were recorded with an instrument called PAL (Passive Acoustic Listener), developed by Nystuen [1] at the Applied Physics Lab, University of Washington. The recording strategy is to turn on the system and pick up 4.5 seconds at the beginning of the pre-set duty cycle. A spectrogram is derived to check whether a certain frequency band with significant intensity exists. If so, additional 4.5-second clips are recorded until the conditions cease to exist; otherwise, the system goes to sleep mode and waits for the next duty cycle. The final data file therefore consists mostly of intermittent recordings of 4.5-second clips. Conventional PCA takes the whole m×n face image (in our case the spectrogram) and stretches it into a long mn×1 vector. M training images, represented by these mn×1 vectors, are used to obtain eigenvectors. The vectors corresponding to the k largest eigenvalues are used to construct the recognition space, so all the images become points in this space. Face recognition is then reformulated as finding the shortest Euclidean distance between points. In our case, unlike human faces, there are no well-defined features such as eyes, ears, mouth, and nose, so the original spectrogram is not suitable as the direct input for PCA. Instead, the invariant moments of a spectrogram image serve this purpose better. Our modified PCA uses the first seven invariant moments (all scalars) of each spectrogram image frame to form the 7×1 feature vector; the rest of the procedure follows the conventional PCA exactly. Among all the data, we manually identify twenty frames for each case and use them as the base training set.
Feeding several unknown clips into classification experiments, we suggest that point-based feature extraction is an effective way to describe whistle vocalizations, and we believe that this algorithm would be useful for extracting features from noisy recordings of the calls of a wide variety of species. (Illustrative sketches of the recording strategy and the moment-based classifier are given below.)
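
The duty-cycle gating described in the abstract (record a 4.5-second clip, derive its spectrogram, keep recording while a frequency band of interest shows significant intensity, otherwise sleep until the next cycle) can be outlined roughly as follows. This is an illustrative sketch only, not PAL's actual firmware: the band limits, the threshold, and the record_clip/store/sleep_until_next_cycle callables are assumed names and values.

    import numpy as np
    from scipy.signal import spectrogram

    CLIP_SECONDS = 4.5          # clip length from the abstract
    BAND = (2_000, 20_000)      # hypothetical frequency band of interest, Hz
    THRESHOLD_DB = 50.0         # hypothetical intensity threshold

    def band_is_active(clip, fs):
        """Check whether the clip's spectrogram shows significant energy in BAND."""
        f, t, sxx = spectrogram(clip, fs=fs)
        mask = (f >= BAND[0]) & (f <= BAND[1])
        level_db = 10 * np.log10(sxx[mask].mean() + 1e-12)
        return level_db > THRESHOLD_DB

    def duty_cycle(record_clip, store, sleep_until_next_cycle, fs):
        """One duty cycle: keep appending 4.5 s clips while the band stays active."""
        clip = record_clip(CLIP_SECONDS)        # hypothetical hardware call
        while band_is_active(clip, fs):
            store(clip)
            clip = record_clip(CLIP_SECONDS)
        sleep_until_next_cycle()                # hypothetical hardware call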
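
The modified PCA replaces the stretched m×n pixel vector of conventional face recognition with the seven invariant (Hu) moments of each spectrogram frame, followed by the usual eigen-decomposition and nearest-Euclidean-distance classification. Below is a minimal sketch under those assumptions, using OpenCV's Hu moments and NumPy; the choice k = 3 and the data layout are assumptions, not the authors' implementation.

    import numpy as np
    import cv2

    def feature_vector(spec_frame):
        """First seven Hu invariant moments of a spectrogram image -> (7,) vector."""
        m = cv2.moments(spec_frame.astype(np.float32))
        return cv2.HuMoments(m).ravel()

    def fit_pca(train_frames, train_labels, k=3):
        """Build the recognition space from M labelled training spectrograms."""
        X = np.stack([feature_vector(f) for f in train_frames])   # (M, 7)
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)                              # (7, 7)
        eigvals, eigvecs = np.linalg.eigh(cov)                     # ascending order
        basis = eigvecs[:, -k:]                                    # k largest eigenvalues
        projected = (X - mean) @ basis                             # training points in PCA space
        return mean, basis, projected, np.asarray(train_labels)

    def classify(spec_frame, mean, basis, projected, labels):
        """Assign the label of the nearest training point (Euclidean distance)."""
        p = (feature_vector(spec_frame) - mean) @ basis
        return labels[np.argmin(np.linalg.norm(projected - p, axis=1))]

With twenty manually labelled frames per class (rain, ship, whale/dolphin) as the training set, classifying an unknown clip reduces to computing its seven moments, projecting into the recognition space, and taking the label of the nearest training point.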
Language
English
Identifiers
ISBN: 9781424412075, 1424412072
DOI: 10.1109/UT.2007.370947
Title ID: cdi_ieee_primary_4231181
