Chris Miller, Université du Québec à Montréal

Notationists of the world, unite!1

Why notation?
The proliferation of notation systems
Revisions to Stokoe's notation
Other notation systems
Sign notations and the International Phonetic Alphabet
What should a notation be able to do?
Transcribing multilinearity
Non-manuals
Internal vs external perspective
Alphabetic or iconic?
Pros and cons of iconically based symbols
Summing up: Where do we go from here?
References
Footnotes

Why notation?

Sooner or later, anyone who works on a sign language comes up against the question of how to put their data down on paper. The problem isn't so much one of finding a way to preserve the data in an easily accessible form: in the year 2000, videotape is already a universally available storage and playback medium, and digital video production and storage using computers, as well as playback over the World Wide Web, are gradually becoming more widely available. As a result, original data is becoming easier to access and distribute than ever before. While it is true that fairly raw data in the form of photographs or video clips can be extremely useful in many cases to point out the exact data used, it is nonetheless true that in many contexts, such use of pictures can be overkill.

What is really needed is a way to point out for later reference, both for the researcher and for the benefit of others who may need to look at the data, just what aspects of the raw linguistic signal are of interest for the problem under study. The problem is one of finding a way to take what is signed and put down into written form no more than what is strictly relevant so it is no longer necessary, when coming back to the data at a later date, to again have to dig out what is of interest from the original.

One can always describe what is being signed in prose: in some cases, it is the only way to make fully explicit what is going on. In the majority of cases, though, I think most people would agree that doing so is a waste of time, energy and ink. A much more efficient and economical method is to reduce such prose descriptions to the shorthand of a notation system tailored to the facts being described.

At another level, storing data in the form of a notation is not only an efficient way of sharing the data with other researchers in published material: it is also essential in order to manipulate one's data by computer, in databases for example. For anyone hoping to produce a sign language dictionary allowing the user to look up a sign by different aspects of its structure, a database organised along the lines of a phonological notation is almost indispensable. Similarly, having a database of signs entered in a phonological notation can make it easier for a user to automatically perform various sorts of statistical manipulations of the data. Although it is possible to store phonological descriptions in the form of prose, a notational shorthand inevitably uses up much less storage space.
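The kind of structured lookup such a database makes possible can be sketched as follows. This is a minimal illustration only: the field names, glosses and notation values below are invented for the purpose, not drawn from any actual notation system.

```python
# A sketch of a sign database organised along phonological lines.
# Field names and values are invented for illustration; a real
# system would use an agreed transcription standard.
signs = [
    {"gloss": "COMMUNICATE", "handshape": "C", "location": "mouth",   "movement": "alternating"},
    {"gloss": "BAD",         "handshape": "B", "location": "mouth",   "movement": "down"},
    {"gloss": "INDEX2",      "handshape": "1", "location": "neutral", "movement": "forward"},
]

def lookup(field, value):
    """Return the glosses of all signs matching a given structural aspect."""
    return [s["gloss"] for s in signs if s[field] == value]

def frequency(field):
    """Count how often each value of a structural aspect occurs."""
    counts = {}
    for s in signs:
        counts[s[field]] = counts.get(s[field], 0) + 1
    return counts

print(lookup("location", "mouth"))   # all signs made at the mouth
print(frequency("handshape"))        # distribution of handshapes
```

A dictionary user could thus look up signs by handshape or location rather than by gloss, and the same records support the kind of statistical manipulation mentioned above.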

Since a notation is a way of extracting from the raw data those aspects that are of interest to the researcher, any notation system is bound to be the result of a preliminary analysis, and can be expected to carry with it a certain background of assumptions and analytical prejudices. As such, no transcription system can be expected to be completely theoretically neutral. A good example of this is the way phonetic transcriptions of spoken language (normally using the International Phonetic Alphabet, IPA) represent the speech stream as a sequence of discrete segments. The fact is that at the physical level (at least), this is a misrepresentation of what is really going on. In terms of articulation, there is a continuous stream of activity, of transitions from one state to another, on the part of various speech organs working in parallel. In terms of the signal itself, there is a continuous stream of sound generated by modulations of the airflow through the speech canal; again, the sequence of discrete states represented by a segmental notation is in fact an abstraction away from the superficial physical reality.

Since a notation is inevitably a pre-analysis of the data, a sorting-out of what the researcher considers to be the relevant aspects of the signal, there is always the danger that at some point it can be too restrictive in terms of what can be transcribed, i.e., it may prejudice possible transcriptions in favour of a particular kind of analysis, compatible with the one which originally determined the form of the notation system. To avoid such pitfalls, some give-and-take is necessary among users; in other words, there has to be a permanent, ongoing process of discussion and revision of the system to ensure that it accommodates the needs of the largest number of users possible.

The kind of notation that is used depends on the interests of the researcher. Thus, those who are working on syntax, discourse and semantics, not being interested (for the most part) in issues touching on the internal structure of signs, will generally prefer a gloss-based transcription of the data, whereas those who are more interested in the form of the signs themselves, i.e. in more "phonetic" or phonological aspects of the language, will naturally need a notation system that analyses signs into their constituent subparts. Although both types of transcription present similar kinds of problems for sign researchers, there are also problems that are specific to one type or the other. In this article, I will restrict myself to the problems connected with phonetic/phonological notation systems.

The proliferation of notation systems

When the first modern linguistic analyses of sign languages were published in 1960, an explicit notation for recording phonological structure was an essential part of the authors' arguments for the status of the sign languages as true languages. The very fact that one could represent signs, in written form, as being made up of a set of systematically structured subparts, was an argument for their linguistic status.

The first widely known phonological notation system for a sign language was described in Stokoe (1960) and in the Dictionary of American Sign Language on linguistic principles (Stokoe et al. 1965). This notation system has been the basis and inspiration for many others since: it was modified later on by researchers at the University of California at Berkeley (Friedman 1976, Mandel 1981), among others, to account for new observations about the structure of ASL, and further modifications were made by researchers on British, Dutch and Italian sign languages, to name but a few: as a result, where there was originally only one Stokoe notation, independent revisions in different countries have given rise to a family of notation systems. The upshot is that no single, universally accepted version of Stokoe notation any longer exists.

Revisions to Stokoe's notation

Some revisions of Stokoe's original notation were aimed at filling in what were perceived to be gaps in the system. Certain additions were made in order to account for aspects of the form of signs that were not provided for in the original notation. Thus, for example, current American and British notations provide a way of representing spatial orientation of the palm and fingers by means of arrows, an innovation that, not being part of Stokoe's original analysis, did not form part of the system he proposed. British notation, by means of diacritics representing each of the fingers and handparts, allows the transcriber to minutely discriminate which finger(s) and handpart(s) on the non-dominant hand are touched by the dominant hand in a sign - filling a gap in the system that is especially obvious in a sign language that makes great use of two-handed fingerspelling and signs derived from fingerspelling (where letters of the manual alphabet are often distinguished by the finger or handpart at which the dominant hand touches the non-dominant hand).

One characteristic of the original version of Stokoe notation that made it unsuitable for transcribing other sign languages or for more narrow or "phonetic" transcriptions of ASL itself was the fact that the repertoire of symbols was determined by a classical phonemic analysis of ASL structure. The need to fill in gaps in the repertoire of available handshape symbols, both for ASL and for other sign languages, motivated a great many modifications that are responsible for a large part of the divergences between systems. For example, distinct modifications using diacritics were added in different systems to allow a greater variety of handshapes to be represented. Where Stokoe uses a superscript triple dot () as a diacritic on handshape symbols to represent either curvature or bending of the fingers, the Berkeley group in the USA introduced the circumflex accent (^) to indicate bending of the fingers at the metacarpophalangeal joint (as opposed to curvature, for which they reserved the triple dot). At about the same time, British Sign Language researchers introduced the circumflex with a very different function: in BSL notation, it represents a handshape in which the selected fingers are in contact with the thumb, bending of the fingers being represented by two dots ( ¨ ). For Italian Sign Language, two diacritics were adopted for bending: a rightward arrowhead () for slight bending of the fingers (40-60° angle) and a bar () for flat bending, where the fingers form a nearly 90° angle with the palm of the hand.

For handshapes involving similar degrees of bending of the fingers, Dutch researchers introduced new base handshape symbols (e.g. and ¬). Rather than using the base handshape symbol to distinguish number of fingers and diacritics to represent curvature or bending (the strategy used in British and Italian notation), they adopted the diacritic prefix b ("baby") from American usage to indicate that besides the thumb, only the index finger was involved in the handshape. These are only some of the differences between systems derived from the original Stokoe notation: I could give many more, but these serve as an illustration of the way independent modifications of Stokoe's original notation have restructured the system in different and often incommensurable ways.

Other differences among Stokoe-based systems are due to differences in the value of manual alphabet handshapes. In the original version of the system, handshape symbols were based on the closest letter value in the North American manual alphabet or the closest number value in the ASL number system. Researchers in other countries adapted letter symbols to represent values from their own manual alphabets and number systems: for example, the Dutch handshape symbol <T> represents the handshape symbolised as <F> in American notations and the handshape represented by the Italian system's <D> has no connection with the handshape the American version of the notation represents by the same symbol. The significant differences between the Swedish manual alphabet and the North American are similarly responsible for differences in values of handshape symbols between Swedish notation and Stokoe notation (Prillwitz and Zienert 1990).

Other notation systems

It might seem that the multiplication of notation systems is a recent phenomenon, but the fact is that it has been part of sign linguistics since the beginning of the modern period. In the same year as Stokoe's (1960) Sign language structure, LaMont West, in his PhD thesis on Plains Indian Sign Language, developed another notation system based on similar phonemic principles. A significant difference between the two notations is that while Stokoe's attempts as far as possible to give its symbols a mnemonic value (i.e., handshapes represented by letters and numbers corresponding to manual alphabet and number values; action and body part symbols designed iconically), West's adopts, on an essentially arbitrary basis, printed letters and a few symbols borrowed from phonetic transcription of oral languages to represent handshape, bodypart and direction, indifferently. The gross distinctions between classes are: voiceless consonant symbols (with or without diacritics) represent handshapes (without reference to any manual alphabet values), voiced consonant symbols represent location (bodypart or spatial), vowel symbols represent spatial direction and movement types are represented by nasal consonant symbols. Probably because West's work was a description of a secondary linguistic system with a limited community of users, unlike that of Stokoe, which described the primary language of the (much larger) Deaf community of the United States and much of Canada, it received little attention among sign linguists and has thus had practically no influence on other researchers.

The current proliferation of notational systems isn't limited to the descendants of Stokoe notation, however. The 1980s and 1990s, especially, saw the appearance of a large number of original proposals. Below is a list of several such systems, with notes on some of their peculiarities: it is limited to those that I know of and is far from exhaustive. Figure 1 gives an idea in graphic form of the relationships between different notation systems.

Papaspyrou's notation
This system is proposed in Papaspyrou's 1990 PhD thesis. Like West's, it is based on alphabetic symbols and unlike Stokoe-type notations, not on mnemonic principles.

HamNoSys (Prillwitz and Vollhaber 1989, Prillwitz et al. 1990)
This system, designed with the aim of applicability to the largest number of sign languages possible, attempts maximum iconicity in its inventory of symbols, including handshapes. It provides for detailed notation of point of contact on the non-dominant hand as well as for the notation of spatial locations. Provision is made for notation of non-manuals. The system is available as a complete software package for the Macintosh operating system and a computer font for Windows.

Jouison's notation
This system, presented in Jouison (1990), is intended as a notation capable of transcribing all communicative behaviours of the signer, whether sign language as such or mime, in order not to prejudice subsequent analysis. Symbols are all invented and their mnemonic basis is internal to the system.

Laban and Benesh dance notations (Laban 1956; Benesh and Benesh 1969)
These notations are designed to record the movements of dance rather than to make detailed transcriptions of linguistic behaviour. Nevertheless, Laban has been adapted by Farnell (1990) for recording the Plains Indian Sign component of the "sign talk" used by the Nakota/Assiniboine people of Montana. The font is available for the Macintosh operating system.

Liddell and Johnson's notation (Liddell and Johnson 1989)
This notation differs from most others in two ways. Firstly, it isolates features of the sign into distinct cells of a grid; secondly, it makes great use of abbreviations rather than single-character symbols to represent the different aspects of articulation. The only exceptions to this generalisation are the symbols for movement (M) and hold (H) segments - L&J's notation places a greater emphasis on the prosodic segmentation of the sign than any other system - and handshape symbols. The notation of handshapes differs, however, from the approach taken in most other systems: in the overall spirit of this system, they are partially decomposed into formational components. The base character represents the subset of the four fingers that are selected in a given handshape, and is followed by diacritics that indicate bending or curvature of the fingers and the relationship between the thumb and the fingers. L&J notation provides a detailed method of specifying spatial coordinates for transcribing directionally and spatially modified signs.

SignFont
This is a computerised Macintosh font designed principally to be used as a writing system for ASL. Symbols are invented and partially iconic (especially handshape). It includes provisions for writing down non-manuals and spatial locations.

Sutton SignWriting (Sutton 1973, 1981)
This system is based on the combination of conventionalised iconic representations of body parts and movements into stylised drawings of signs. There are several levels of detail with which signs can be transcribed, from a detailed, "phonetic" transcription to shorthand. The fact that detailed phonetic SignWriting conflates a number of distinct symbols into a single drawing would seem to work against its use as a database-friendly transcription system since the distinct symbols, being amalgamated into a whole, lose their autonomy. Software is available for using SignWriting on both Macintosh and DOS/Windows systems.

Figure 1. Family trees for sign notation systems.

Although many notation systems have been proposed and are currently in use, it seems, from an informal survey on the electronic mailing list SLLING-L in 1994, that HamNoSys and derivatives of Stokoe notation are the most widely used by researchers. This might seem to mitigate the current babel somewhat, leaving researchers an easy choice of a more iconic (HamNoSys) or a less iconic system (Stokoe and derivatives) and minimising the problem of moving between systems.

Despite appearances, there are enough significant differences between Stokoe-type systems and HamNoSys that translating between them is not always a straightforward task. For one, each system follows different syntactic rules. Where Stokoe notations follow the symbol order location/articulator(s) > (orientation >) action, HamNoSys uses articulator(s) > orientation > location > action. Similarly, since HamNoSys and the various derivatives of Stokoe notation each represent handshapes by different combinations of base handshape symbols and diacritics, straightforward translation between systems is again difficult. The current multiplicity of notation systems is a problem not only because there are so many: the differences in the way systems are organised are important enough that each system has to be learned almost from the ground up.
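The symbol-ordering difference alone can be sketched as a simple rearrangement, as below. The component labels and values here are placeholders for illustration, and, as the paragraph above notes, reordering is only part of the job: the symbol inventories themselves do not map one-to-one.

```python
# A sketch of the syntactic (symbol-order) difference between the
# two families of systems. Component values are placeholders, not
# actual notation symbols.
STOKOE_ORDER   = ["location", "handshape", "orientation", "action"]
HAMNOSYS_ORDER = ["handshape", "orientation", "location", "action"]

def reorder(components, from_order, to_order):
    """Rearrange a sign's components from one system's syntax to the other's."""
    by_name = dict(zip(from_order, components))
    return [by_name[name] for name in to_order]

stokoe_style = ["chin", "B-hand", "palm-up", "contact"]
print(reorder(stokoe_style, STOKOE_ORDER, HAMNOSYS_ORDER))
# note: this reorders components but does not translate the symbols,
# which is where the real difficulty lies
```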

Sign notations and the International Phonetic Alphabet

Compare this situation with that of spoken languages. I think it would be safe to say that most people who have been through a program in linguistics have memories of phonetics class, learning to transcribe sounds from their own language or other more exotic ones, using the International Phonetic Alphabet (IPA). Certainly, anyone who has read in phonology will have a good acquaintance with the IPA or a notation that varies from the IPA only minimally.

Although notational practice is not uniform in spoken language phonology, the differences between systems are nowhere near as drastic as those between sign language notation systems. The most important differences between systems are at the level of different characters being used for a restricted set of sound types (mostly alveolo-palatal consonants and alveolar affricates, as well as certain non-cardinal vowels): it is a fairly simple matter of learning the correspondence between a restricted set of symbols used in current American notational practice and their IPA counterparts. Otherwise, the two systems are similar where it counts. There is no problem of the basic set of symbols for segments being constructed on different assumptions, e.g. one system using place of articulation for the basic symbol with diacritics to mark manner of articulation, while the other system marks manner with base symbols and uses diacritics to mark place. And there is certainly no problem of the two systems using a different sort of syntax: one character follows another, in chronological order.

Sign language research is in real need of an "IPA" of its own. Presenting and exchanging data in a standard notation that is as widely known among sign researchers as the IPA is among oral language researchers should save authors the time and effort needed to produce photographs, drawings or video captures illustrating the data at hand and, at the same time, should allow researchers to present in an explicit form those aspects of the data that are truly relevant for their purposes. Just as important, with a single notation system, readers would not have to learn a new notation system in order to follow the data in a given publication.

Some systems have been explicitly proposed as international notation systems (Peng 1976, Prillwitz et al. 1989), but being the efforts of individuals or of small groups, they fail in general to take into account the needs of the largest possible number of users. Ideally, a universal sign notation should emerge from discussions between users rather than being presented as an idealised, hermetic system: it should be flexible enough to allow revisions without overhauling the whole system - much as the conventions of the IPA have evolved over the past century, taking into account the needs of its users.

What should a notation be able to do?

I cannot claim to know all the problems that are posed for a notation system by the structure of sign languages, but I would like to suggest in the paragraphs below a few considerations that ought to be taken into account in putting together a generally acceptable system.

Firstly, it should be remembered that no notation system is neutral. All systems are, to some degree, the product of an analysis, an interpretation that the developer of the transcription believes best represents the nature of the data. As far as possible, care should be taken in constructing any notation so that it allows users the greatest freedom possible in transcribing and presenting phenomena that are of interest. While it is not possible, in principle, to foresee all the phenomena that must eventually be accounted for, enough research has been done to date that a fairly clear picture emerges of what a sign notation must be able to do.

Most sign language notation systems I am aware of are essentially unilinear: this characteristic is inherited from our writing system. Whether a sign involves one hand or two, has both hands active or uses one as a passive/base hand, it is transcribed in all character-based systems on a single line. This works relatively well when a notation is used to transcribe single signs or sequences of single signs, as has usually been the case in the literature. However, the unilinearity of most notation systems starts to cause problems when it is necessary to represent temporally parallel behaviours, especially beyond the domain of the single sign.

In conversational signing, things occur that can't straightforwardly be accounted for using a one-line form of notation: for example, dominance reversals, holds of a sign on one hand with simultaneous signing on the other, two separate signs produced simultaneously, even signs on one hand embedded phonologically inside a two-handed sign (of which I will discuss an example below). It is possible, for example, to note on a single line different behaviours for the dominant and non-dominant hands within the stretch of a single sign by means of additional notational conventions such as using a backslash or dominance reversal symbol, as is done in Johnston (1991). However, it is often the case that behaviours on one hand have a domain that is larger than a single co-occurring sign on the other - for example, holds or embedding of more than one sign within another. In such cases, attempting to squeeze everything onto a single line, with the temporally larger sign chopped up, Procrustes style, into several pieces to fit with each shorter sign on the other hand, risks giving a confusing or even false picture of the data.

It may be, for example, that a researcher wants to present the data in a way that shows how different signs on each hand behave in terms of phrasal rhythm. In Quebec Sign Language, as in American Sign Language and a number of others, the sign COMMUNICATE involves an alternating forward-backward movement of the hands before the mouth. Current notation systems conflate the relation between the two hands into a single structural unit "alternating", the symbol varying according to the system used. In terms of the internal structure of the sign, this makes sense.

But ordinary sign discourse presents us with surprising, nevertheless real, examples of phenomena that violate the neat relationships we find inside of single, isolated signs. Just as we find examples elsewhere of two or more signs being made simultaneously, the sign COMMUNICATE is at one point in our LSQ corpus made simultaneously with the signs BAD INDEX2 on the dominant hand. What happens is that the two hands are raised from a resting position to the initial staggered positions and handshape of the beginning of COMMUNICATE, and as the non-dominant hand moves inward to the mouth location keeping the /C/ handshape of the sign, the dominant hand moves outward from the mouth, changing quickly into the /B/ handshape and downward movement of BAD, followed again quickly by the /1/ handshape and forward pointing movement of INDEX2. 2,3

Transcribing multilinearity

Now, if I want to present all this on a single line, what can I do? One possibility would be to chop up the single movement of the non-dominant hand into three parts, to show that it goes with each of three distinct behaviours on the dominant hand. However the line is arranged, though, the relationship in time between the two hands is not immediately obvious, and the situation only gets worse with longer stretches of two-handed signing, when there are stretches of parallel signing and dominance reversals to be contended with. On the other hand, providing a line for each hand allows the relationships to be grasped immediately.

The same point applies to non-manuals as well as to the manual signing stream itself. If everything is presented on a single line, with single behaviours chopped up into several segments to show temporal coordination, the relations between different phenomena risk being lost to the reader. If different lines are used for different phenomena, though, the relationships are immediately obvious. Such phenomena are frequent enough and of sufficient interest that any notation system must provide for a way to note left and right hand behaviours on autonomous lines.
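The multilinear alternative can be sketched as a set of tiers whose cells are aligned by time slot, as in the COMMUNICATE / BAD INDEX2 example above. The representation below is a deliberately simplified illustration (tier names and slot labels are invented), but it shows how a column, read vertically, gives the simultaneous behaviours of both hands at a glance.

```python
# A sketch of a multilinear (tiered) transcription: one line per hand,
# cells aligned by time slot. Labels are simplified for illustration;
# the non-dominant hand carries COMMUNICATE through all three slots
# while the dominant hand produces BAD and INDEX2.
tiers = {
    "dominant":     ["COMMUNICATE", "BAD",         "INDEX2"],
    "non-dominant": ["COMMUNICATE", "COMMUNICATE", "COMMUNICATE"],
}

def at_slot(tiers, i):
    """What each articulator is doing at time slot i (one aligned column)."""
    return {name: cells[i] for name, cells in tiers.items()}

for i in range(3):
    print(i, at_slot(tiers, i))
```

Nothing here needs to be chopped up to show coordination: the temporal extent of each behaviour is simply the run of cells it occupies on its own tier.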

Non-manuals

Another criticism that can be levelled against some systems is their lack of provision for transcribing non-manual behaviours, whether facial expressions, head/body movements, eye gaze and blinks or mouth behaviours including the mouthing of spoken language words. There are means available, for example, in HamNoSys and Sutton SignWriting, as well as an adaptation of Stokoe notation to the non-manual component (Edinburgh Non-manual Coding System, see Colville 1986). This latter system is attractive in that it uses already extant symbols from Stokoe notation to represent non-manual behaviours, e.g. # and _, respectively, to represent closing or opening of the eyes or the mouth, just as they are used for the handshapes at the manual level of the notation.

However, surprisingly little attention is paid to mouthing behaviours. A key to understanding and distinguishing mouthing phenomena is being able to transcribe accurately what is visibly mouthed by signers. Again, a universally consistent notation is needed in order to permit comparison across sign languages. The most obvious possibility would be to adapt IPA notation, taking into account the fact that there are fewer distinctions possible at the visual level than at the auditory level. In other words, to give one example, unless we find some visible correlate of voicing, any voiced consonant ought to be transcribed with the same symbol as its voiceless counterpart. It would also be desirable, in putting together a notation for mouthing, to avoid overlap with symbols for oral behaviours not connected with mouthing of oral language words. In this connection, it is necessary to deal with questions such as whether to represent an open mouth with, say, the symbol <a> or <_> or whether to represent spread and slightly open lips as < _ > or with another symbol.
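The proposal to collapse auditorily distinct but visually identical IPA symbols can be sketched as a simple many-to-one mapping. The groupings below are invented for illustration and are certainly too coarse for real use; the point is only the mechanism by which, e.g., voiced consonants fall together with their voiceless counterparts.

```python
# A sketch of reducing IPA consonants to visually distinguishable
# classes for transcribing mouthings. The groupings are illustrative
# simplifications, not an established standard.
VISIBLE_CLASS = {
    "p": "p", "b": "p", "m": "p",   # bilabials look alike on the lips
    "f": "f", "v": "f",             # labiodentals: voicing is invisible
    "t": "t", "d": "t", "n": "t",   # alveolar closures fall together
    "s": "s", "z": "s",
}

def mouthing(ipa_string):
    """Map an IPA string onto a reduced, visually based transcription."""
    return "".join(VISIBLE_CLASS.get(ch, ch) for ch in ipa_string)

print(mouthing("bad"))
print(mouthing("zip"))
```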

Rhythmic aspects of signing are not usually treated in any depth in current notations, except for some diacritics used in different systems to indicate repetition, sharpness, tension and duration of movements. Only Liddell and Johnson's notation pays any attention to the role of holds in the articulation of signs, and I know of no notation that allows for differentiating various constant or changing rates of movement in signs, an important aspect of phonological form given the observations in Chapter 11 of Klima and Bellugi (1979).

Internal vs external perspective

One deficiency of most notations is their failure to take into account the possibility of describing sign behaviour (orientation and movement in particular) from two different perspectives, what Mandel (1981) terms "internal" or "articular" vs "external" or "geometrical" descriptions. Internal-articular descriptions represent signs in terms of anatomical states of the articulators (e.g. "attitude" is described as pronation/supination of the forearm or "movement" as flexion/extension or abduction/adduction of an articulator); external-geometrical descriptions use the vocabulary of spatial direction and movement along geometrical paths in space. Almost universally, current notations describe signing from an external-geometrical perspective; for example, "orientation" is described as facing of the palm and/or fingers in an upward/downward, forward/backward direction in space and "movement" is described as displacement of the hand(s) upward/downward, forward/backward in space and so on. This is a case where the form of the notation is largely determined by a particular analysis.

Anyone, such as myself, who is interested in the phonological regularities that involve given states of the articulators as well as those that make reference to external, geometric patterns superimposed on an otherwise formless space, is stumped when it comes to transcribing these realities using current notation systems. Stokoe's original notation did make provision for transcribing certain aspects of the sign from an internal-articular perspective, as with the symbols for pronated < œ >, supinated < Œ > and raised < > forearm, but these provisions are far from exhaustive: from an internal-articular perspective, it is also desirable to be able to transcribe states such as raised/extended upper arm, and various degrees of forearm flexion/extension, to mention only a few.

A further question concerns the use of global features to capture certain movement features that refer to relations between the hands in two-handed signs, such as alternating and interchanging movement or approach, crossing and separation. These features are basically shorthand notation for mirror-image, identical actions by the two hands. Using a bilinear notation for the manual part of the signing stream leads one to ask whether these notations are really useful anymore. With alternating and interchanging movement, such shorthand notations don't seem to be necessary in a two-line transcription, since one need only look at what is happening on the two hands in order to see whether or not the action is the same or reversed on the two hands. However, there does seem to be a good reason for retaining special symbols for approach and crossing, since it is necessary to distinguish these cases. In two-line notation, then, "approach" and "crossing" symbols would still need to be used as orientation symbols specifying the trajectory or position of one hand with respect to the other hand taken as a location.

Alphabetic or iconic?

An issue that does not directly concern the capabilities of a notation system has to do with the type of symbols used and applies mainly to hand configuration symbols: are they iconic (i.e. reduced, conventionalised drawings), are they based on correspondences with manual alphabet and number system values, or are they chosen arbitrarily without thought of any mnemonic value?

Some pros and cons of using symbols based on a manual alphabet are:

On the plus side, they can be transferred somewhat more directly into ASCII4 form for computer storage and transmission. This is ideal both for phonological databases and for sending transcriptions through electronic mail. One proposal for a computer-readable and transmittable ASCII version of (augmented) Stokoe notation can be found in Mandel (1994). Although numerous changes must be made to translate the basic notation into computer-usable form, the extensive use of alphabetic characters in Stokoe notation means that these changes are far less sweeping than those needed to convert a non-alphabet-based system for the same purposes. Furthermore, many of the problems of computer-usability should eventually become a non-issue with the gradually increasing adoption of the Unicode standard, which allows for much larger and more varied character sets than does ASCII.
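The general idea of such a transliteration can be sketched in a few lines. To be clear, this is a minimal sketch, not Mandel's actual scheme: the symbol inventory and the ASCII equivalents below are invented for illustration only.

```python
# A minimal sketch of transliterating non-ASCII notation symbols into
# plain ASCII for storage and e-mail transmission. All mappings here are
# hypothetical, invented for illustration; they do not reproduce
# Mandel's (1994) ASCII-Stokoe conventions.

ASCII_MAP = {
    "œ": "w-",   # pronated forearm  (hypothetical ASCII code)
    "Œ": "w+",   # supinated forearm (hypothetical ASCII code)
    "↑": "^",    # upward movement   (hypothetical ASCII code)
    "↓": "v",    # downward movement (hypothetical ASCII code)
}

def to_ascii(transcription: str) -> str:
    """Replace non-ASCII notation symbols with ASCII equivalents,
    passing alphabetic (already-ASCII) symbols through unchanged."""
    return "".join(ASCII_MAP.get(ch, ch) for ch in transcription)

print(to_ascii("Bœ↑"))  # -> Bw-^
```

The point the example makes concrete is that alphabetic handshape symbols such as < B > survive the conversion untouched; only the non-alphabetic symbols need recoding.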

On the negative side, letter symbols don't necessarily represent the same handshape across languages. For example, the handshape that represents the letter < A > in the North American manual alphabet and many others corresponds to < B > in the Swedish manual alphabet. There are many other examples of mismatches between original Stokoe notation values and local manual alphabet values, but perhaps the problem is not as big as it may seem. Consider the case of the IPA: this notation is based (with additions) on the roman alphabet, but that doesn't stop it from being internationally accepted, even though many IPA sound values don't correspond to those the symbols have in a large number of roman-alphabet languages.

Now, coming back to the original Stokoe notation, most of its symbols correspond to International manual alphabet values and to those of the majority of fingerspelling alphabets. Of the 44 countries or groups of countries surveyed in Carmel (1982), the vast majority share a significant number of handshape-character correspondences with the International and North American manual alphabets. (The principal differences between the two are that the North American handshapes corresponding to < F > and < T > are not used in the International alphabet, since they are considered vulgar or obscene in many cultures.)

Among the roman-based one-handed manual alphabets, those showing the greatest differences in correspondence values are the Spanish-based alphabets, the Swedish and the Portuguese (from which the former is derived); nonetheless, these alphabets still share a significant proportion of handshapes with values identical to those in the North American and International alphabets. Surprisingly, even two-handed alphabets include some handshapes that correspond to International one-handed values. Manual counterparts of non-roman scripts such as the Chinese, Japanese and Indian Nagari fingerspelling systems and the Israeli, Greek, Thai and Russian alphabets are based to a surprisingly large extent on roman alphabet values. The only cases of complete non-correspondence with roman values are fingerspelling systems based on the Arabic alphabet, the Ethiopian syllabary and the Korean Hangul syllabary-alphabet: in each case, handshapes are more or less iconic representations of the characters in the writing system.

It seems, then, that there is in fact a fairly large stock of handshape-letter correspondences that are shared by most of the world's manual alphabets. Even if letter values based on those of the North American manual alphabet don't provide an ideal fit with those of other languages, it should not be too difficult to come up with a system of handshape symbols based as closely as possible on those handshape-letter values that are either universal or, at least, widespread.

One possible strategy would be to adopt the approach of Liddell and Johnson's notation system (as has been done to a certain extent for notating Quebec Sign Language handshape symbols): use letter symbols to represent basic finger sets and add diacritics to represent finger bending/curvature and finger/thumb relations. Doing so would have several advantages. First, structural relations between handshapes could be expressed more directly by the notation than in many current systems. Second, freeing up several characters this way would increase the number of characters available for other uses in an ASCII form of the system. Finally, decomposing handshapes into form-based constituents would be advantageous for database organisation, since independent characters would be available to cross-reference handshapes according to a variety of salient features.
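The database advantage can be made concrete with a small sketch. The feature names, diacritics and glosses below are invented for illustration; they do not reproduce Liddell and Johnson's actual inventory.

```python
# A sketch of how decomposed handshape notation could support database
# cross-referencing. All symbols, feature values and sign glosses are
# hypothetical, chosen for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Handshape:
    finger_set: str   # letter symbol for the selected fingers, e.g. "B"
    curvature: str    # diacritic: "" (straight), "~" (bent), "^" (curved)
    thumb: str        # thumb relation: "o" (opposed), "u" (unopposed)

LEXICON = {
    "SIGN-1": Handshape("B", "", "u"),
    "SIGN-2": Handshape("B", "~", "o"),
    "SIGN-3": Handshape("1", "^", "o"),
}

def signs_with(**features):
    """Cross-reference signs by any combination of handshape features."""
    return sorted(
        gloss for gloss, hs in LEXICON.items()
        if all(getattr(hs, k) == v for k, v in features.items())
    )

print(signs_with(finger_set="B"))  # -> ['SIGN-1', 'SIGN-2']
print(signs_with(thumb="o"))       # -> ['SIGN-2', 'SIGN-3']
```

Because each constituent (finger set, curvature, thumb relation) is an independent symbol, queries over any single feature fall out for free, which is exactly the cross-referencing advantage described above.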

Pros and cons of iconically based symbols

From a learner's point of view, an iconic system like HamNoSys is perhaps easier to learn due to its (relative) transparency. On the other hand, it may cause difficulty when it comes to sharing data across computer networks, since transfer to ASCII characters is much less direct than with alphabetic character-based systems. Even transferring Stokoe notation into ASCII form is not an entirely straightforward task. It is true that as far as symbols other than handshapes are concerned, the question is more academic, since most current notation systems rely principally on iconic characters for these, including those representing body parts and various aspects of movement and directionality.

Summing up: Where do we go from here?

Although one of the purposes of notation systems is to provide a common language for exchanging data between researchers, they seem to play exactly the opposite role in the field of sign linguistics. There are so many, and their internal organisation is often so different, that it is unlikely that many people have the time or energy to learn more than one of even the most widely known. Oral language linguistics has benefited from the existence of a widely adopted standard of notational practice, the IPA, which undergoes periodic revisions to keep pace with its users' needs. So far, nothing similar exists for sign languages. In order to develop a truly effective and useful system, it will be necessary to take into account the many needs of users. Below, I recapitulate a number of aspects that must be considered in designing a notation that is as effective and useful as possible.

An ideal notation system should allow for easy and unambiguous notation not only of individual signs, but also of running signing, and in particular:

Although multilinear transcriptions must be provided for, they are overly detailed for many purposes, especially for the transcription of single signs, and unilinear transcriptions should also remain an option. This possibility of making transcriptions either in multilinear, "unfolded" format or in unilinear, "compact" format can be compared, to a certain extent, to the possibility of making "broad" or "narrow" phonetic transcriptions in IPA.

Care should be taken in designing a notation system to provide for a consistent way of notating behaviours beyond the manual level, including:

Symbols of the notation system should be chosen, as far as possible, for mnemonic value, whether they are iconic or alphabetic. Any bank of symbols should be translatable into ASCII format as directly as possible5, that is to say, keeping as many characters as possible identical in the two formats and keeping syntactic conventions as similar as possible. In practical terms, this means that symbols (most logically, handshape symbols) should be alphabetic. In principle, though, I see no profound objection to the possibility of providing iconic counterparts of manual alphabet-based symbols, as long as the internal logic of handshape notation is similar in both, in order to allow direct equivalences.

Most importantly, a notation must provide for the possibility of taking into account internal-articular modes of description as well as the more commonly seen external-geometric mode.

The discussions at the European Science Foundation workshop in Leiden, December 1998, were an initial step toward more widespread discussions in the sign linguistics community over the usefulness and eventual form of a standardised notation system. It is to be hoped that the workshop on the question of notation at the TISLR VII conference, July 2000 in Amsterdam, will finally provide the forum where a variety of viewpoints can interact and eventually produce a system that will be rich enough and flexible enough to satisfy the wide range of needs of those working on linguistic aspects of sign language structure. It is really only by taking into account the needs of the widest possible range of users that a notation system can become universal in the true sense of the word.

References

Benesh, Rudolph and Joan Benesh. 1969. An introduction to Benesh movement-notation: Dance. (Revised and extended edition) New York: Dance Horizons.

Bergman, Brita. 1979. Signed Swedish. Stockholm: National Swedish Board of Education and Liber Utbildningsförlaget.

Brien, David (ed.). 1992. Dictionary of British Sign Language/English. London: Faber and Faber.

Carmel, Simon. 1982. International hand alphabet charts. Second edition. Published by the author.

Cohen, Einya, I. M. Schlesinger and Lila Namir. 1977. A new dictionary of sign language employing the Eshkol-Wachmann movement notation system. Paris: Mouton.

Colville, Martin. 1986. The Edinburgh non-manual coding system. In Bernard Tervoort (ed.), Signs of life. Proceedings of the Second European Congress on Sign Language Research. Amsterdam: Dutch Foundation for the Deaf and Hearing Impaired Child, 204-208.

Corazza, Serena. 1990. The morphology of classifier handshapes in Italian Sign Language. In Ceil Lucas (ed.), Sign language research. Theoretical issues. Washington, DC: Gallaudet University Press.

Farnell, Brenda. 1990. Plains Indian sign talk: action and discourse among the Nakota (Assiniboine) people of Montana. Doctoral dissertation, Indiana University.

Friedman, Lynn. 1976. Phonology of a soundless language: Phonological structure of the American Sign Language. Doctoral dissertation, University of California, Berkeley.

Hutchins, Sandra, Howard Poizner, Marina McIntire, Don Paul and Don Newkirk. 1990. Implications for sign research of a computerized written form of ASL. In W. H. Edmondson and F. Karlsson (eds.), SLR '87. Papers from the Fourth International Symposium on Sign Language Research, Lappeenranta, Finland, July 15-19, 1987. Hamburg: Signum-Verlag, 255-268.

Johnston, Trevor. 1991. Transcription and glossing of sign language texts: Examples from Auslan (Australian Sign Language). International Journal of Sign Linguistics 2.1:3-28.

Jouison, Paul. 1990. Analysis and linear transcription of sign language discourse. In Siegmund Prillwitz and Thomas Vollhaber (eds.), Current trends in European sign language research. Proceedings of the 3rd European Congress on Sign Language Research. Hamburg: Signum-Verlag, 337-353.

Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, Massachusetts: Harvard University Press.

Kyle, Jim and Bencie Woll. 1985. Sign language. The study of deaf people and their language. Cambridge: Cambridge University Press.

Laban, Rudolph von. 1956. Principles of dance and movement notation. New York: Dance Horizons.

Liddell, Scott and Robert E. Johnson. 1989. American Sign Language: The phonological base. Sign Language Studies 64: 195-227.

Mandel, Mark. 1981. Phonotactics and morphophonology in American Sign Language. Doctoral dissertation, University of California, Berkeley.

Mandel, Mark. 1994. ASCII-Stokoe notation: A computer-writeable transliteration system for Stokoe notation of American Sign Language. Ms.

Newkirk, Don. 1987. SignFont handbook. Architect: Final version. San Diego, California: Salk Institute and Emerson and Stern Associates.

Papaspyrou, Chrissostomos. 1990. Gebärdensprache und universelle Sprachtheorie. Versuch einer vergleichenden generativ-transformationellen Interpretation von Gebärden- und Lautsprache sowie der Entwurf einer Gebärdenschrift. Internationale Arbeiten zur Gebärdensprache und Kommunikation Gehörloser, Band 8. Hamburg: Signum-Verlag, pp. 344.

Peng, Fred C. C. 1976. Sign language and its notational system. In Peter Reich (ed.), The Second LACUS Forum, 1975. Columbia, South Carolina: Hornbeam Press, 188-199.

Prillwitz, Siegmund, Regina Leven, Heiko Zienert, Tomas Hanke and Jan Henning. 1990. HamNoSys. Hamburg Notation System for sign languages. An introductory guide. Hamburg: Signum-Verlag.

Prillwitz, Siegmund and Heiko Zienert. 1990. Hamburg Notation System for sign language. Development of a sign writing with computer application. In Siegmund Prillwitz and Thomas Vollhaber (eds.), Current trends in European sign language research. Proceedings of the 3rd European Congress on Sign Language Research, Hamburg, July 26-29, 1989. Hamburg: Signum-Verlag, 355-379.

Radutzky, Elena. 1989. La lingua italiana dei segni: Historical change in the sign language of deaf people in Italy. Doctoral dissertation, New York University.

Schermer, Trude. 1990. In search of a language. Influences from spoken Dutch on Sign Language of the Netherlands. Doctoral dissertation, Universiteit van Amsterdam.

Stokoe, William C. 1960. Sign language structure: An outline of the visual communication systems of the American Deaf. Studies in Linguistics Occasional Paper 8, University of Buffalo.

Stokoe, William C., Dorothy Casterline and Carl Croneberg. 1965. A dictionary of American Sign Language on linguistic principles. Silver Spring, Maryland: Linstok Press.

Sutton, Valerie. 1973. Sutton Movement Shorthand: A quick, visual, easy-to-learn method of recording dance movement. Irvine, California: Movement Shorthand Society.

Sutton, Valerie. 1981. Sign Writing for everyday use. Boston, Massachusetts: The Sutton Movement Writing Press.

West, LaMont Jr. 1960. The sign language, an analysis. Doctoral dissertation, University of Indiana.

Footnotes

1 This is a revised and updated version of "A note on notation", which appeared in Signpost, volume 7, number 3, autumn 1994, pp. 191-202.

2 Of course, the very symbols I am using to represent handshapes are based on North American handshape values. For readers familiar only with notation based on the Swedish manual alphabet, I would have to say /J/ instead of /B/, and for readers whose number systems use an outstretched thumb for the number 1, as in French Sign Language, I would have to symbolise this handshape as /G/, for example.

3 In the absence of a standard notation system, I have had to use, according to my word processor's statistics, 75 words or 382 characters to communicate the form of this approximately one-second stretch of signing. Subtract the 29 characters for the glosses of the three signs and the symbols for their respective handshapes, and that still leaves 353 characters to describe what ought to be easily notated much more simply.

4 American Standard Code for Information Interchange

5 That is, taking into account the fact that Unicode has not yet become the de facto universal text-encoding standard.


Posted: 22.08.2000
