Language: German, English

Segmentation

Segmentation of ÖGS Texts

SL video available in ÖGS (Austrian Sign Language)

Goal:

The project addresses the segmentation of signed Austrian Sign Language (ÖGS) texts by investigating the occurrence of segmentation cues such as pauses and non-manual elements like head nods.

Methodological approach:

The operational method for segmenting a text and for deciding whether a single instance of a phenomenon has a prosodic segmentation function was implemented as a perception task comprising a two-part sequential procedure. The annotators, non-signers as well as signers, had to segment a selection of different signed text types into segmentation units. These ÖGS texts included informal narrations occurring in monologues and dialogues, formal narrations such as jokes and curricula vitae occurring in monologues, and self-addressed lines of thought embedded in short monologue stories. Afterwards, they had to go through the units they had segmented and list the cues on which their decision for segmenting each unit (and consequently each unit boundary) was based.
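
To make the two-part procedure concrete, here is a minimal sketch (in Python, not part of the project's actual tooling) of how each annotator's first-pass boundary marks and second-pass cue lists could be recorded; all field names, group labels, and values are illustrative assumptions.

```python
# Minimal sketch (illustrative only): a hypothetical data structure for the
# two-part procedure, in which each annotator first marks unit boundaries
# (pass 1) and then lists the cues that motivated each boundary (pass 2).
from dataclasses import dataclass, field


@dataclass
class BoundaryJudgement:
    time_sec: float                                 # position of the perceived unit boundary in the video
    cues: list[str] = field(default_factory=list)   # cues named in the second pass, e.g. ["pause", "head nod"]


@dataclass
class AnnotatorRecord:
    annotator_id: str
    group: str        # e.g. "Deaf native signer", "L2 signer", "non-signer"
    text_type: str    # e.g. "informal narration (dialogue)"
    boundaries: list[BoundaryJudgement] = field(default_factory=list)


# Example with invented values, only to show how the two passes fit together:
record = AnnotatorRecord(
    annotator_id="A01",
    group="L2 signer",
    text_type="joke (monologue)",
    boundaries=[
        BoundaryJudgement(time_sec=4.2, cues=["longer pause", "head nod"]),
        BoundaryJudgement(time_sec=9.8, cues=["blink", "palm-up"]),
    ],
)
```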

Results regarding segmentation units:

The analysis shows that the participants identified on average 14 boundaries per minute. However, the most striking finding was that the Deaf native signers as well as the non-signers identified only half of the boundaries determined by the L2 signers.

1. With regard to the inter-individual agreement on the determined unit boundaries, a first group of the identified prosodic units/boundaries was clearly identified by at least 60% of the participants. These units are larger ones, marked by longer pauses, more (co-)occurring prosodic cues, and more salient cues or cue movements (e.g. a head nod compared to a blink).

2. The second group of possible prosodic units comprises those units which were identified by 40%-60% of the annotators. The boundaries of these units often occurred between larger ‘highly accordant prosodic units’; consequently, the ‘partially accordant boundaries’ mark smaller units. Less perceptible cues such as blinks, palm-ups, and short pauses were also named as boundary signals. Additionally, holding the position of an articulator (such as raised/furrowed brows or a particular head position held throughout a unit) was a cue for determining these smaller units. A schematic illustration of this grouping by agreement level is sketched below.
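
As a rough illustration of the grouping described in points 1 and 2, the following sketch counts, for each candidate boundary, the share of annotators who marked it within a small time window and classifies it as ‘highly accordant’ (at least 60%) or ‘partially accordant’ (40%-60%). The tolerance window, the boundary times, and the annotator marks are invented assumptions, not project data.

```python
# Illustrative sketch only: classify candidate boundaries by the share of
# annotators who marked them, mirroring the >=60% / 40%-60% grouping above.

TOLERANCE_SEC = 0.3  # assumed window within which two marks count as the same boundary


def agreement_share(candidate_time, annotator_marks):
    """Fraction of annotators with at least one mark near the candidate time."""
    hits = sum(
        any(abs(mark - candidate_time) <= TOLERANCE_SEC for mark in marks)
        for marks in annotator_marks
    )
    return hits / len(annotator_marks)


def classify(share):
    if share >= 0.6:
        return "highly accordant"
    if share >= 0.4:
        return "partially accordant"
    return "below threshold"


# Invented example: boundary marks (in seconds) from five annotators.
annotator_marks = [
    [4.1, 9.9, 15.2],
    [4.3, 15.0],
    [4.2, 15.3],
    [4.0, 15.1],
    [9.8, 15.2],
]

for candidate in (4.2, 9.8, 15.2):
    share = agreement_share(candidate, annotator_marks)
    print(f"{candidate:5.1f}s  {share:.0%}  {classify(share)}")
```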

Results regarding segmentation cues:

The manual means were labeled by the annotators as rest position, palm-up, pause/hold, reduplication, and deceleration/accentuation. The list of non-manual cues comprises various head movements (including head nods, headshakes, and upward/downward/sideward/forward/backward movements or positions of the head), body movements of the torso and shoulders, eyebrow movements, changes of gaze direction, and blinks.

Interesting detail:

A comparison between signers (including L2 signers) and non-signers shows that non-manual prosodic cues were almost entirely named by signers, while manual cues and pauses were listed by both signers and non-signers. For instance, 90% of all head movements named as possible segmentation cues were listed by signers, while only 10% were named by non-signers.
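
The kind of signer/non-signer comparison mentioned above could be tallied along the following lines; the cue mentions and group labels below are purely illustrative and do not reproduce the project's data.

```python
# Illustrative sketch: compute, for each cue type, which share of the
# mentions came from signers versus non-signers. All counts are invented.
from collections import Counter

# (group, cue) pairs as they might be extracted from the second-pass cue lists
mentions = [
    ("signer", "head nod"), ("signer", "head nod"), ("signer", "headshake"),
    ("non-signer", "head nod"),
    ("signer", "pause"), ("non-signer", "pause"), ("non-signer", "pause"),
    ("signer", "palm-up"), ("non-signer", "palm-up"),
]

by_cue = Counter(cue for _, cue in mentions)
by_cue_and_group = Counter(mentions)

for cue, total in by_cue.items():
    signer_share = by_cue_and_group[("signer", cue)] / total
    print(f"{cue:10s} named by signers in {signer_share:.0%} of mentions")
```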

(2023-01-14)