Video Conference and Sign Transmission

Studies carried out as part of the FORUM work on video conferencing

Frank Blades and Jim Kyle

Centre for Deaf Studies

University of Bristol

1998

 

Acknowledgements

This work was carried out in conjunction with a number of people, both deaf and hearing in the Centre for Deaf Studies and among the partners in the HORIZON-FORUM project. Developments in video conferencing are particularly important to the transmission of sign language messages and we consider this work to have been particularly informative for the HORIZON-FORUM project. Special thanks to David Jackson, Pete Hinchcliffe, Chris Stone, Paul Scott and Mick Canavan, and to the students and other workers who gave of their time. 

Frank Blades and Jim Kyle, Bristol, December 1997

© CENTRE FOR DEAF STUDIES 1997

 

Project Outline

Introduction

These trials were undertaken as part of the FORUM Project in the EU-funded Horizon Programme "Deaf Studies on the Agenda". The outline of the FORUM workplan identified technology as an important area for research:

"It is often believed that problems of disabled people are addresses by providing new technology. For some people, notably deaf people, the technology is often the wrong technology"

This project workplan outlined ten objectives of FORUM. Although several of these objectives are addressed within this section of the project, the context we considered was:

training in four modes: centralised, distributed, distance and interactive.

This study examined video conferencing as an enabling technology for deaf people.

The Technology

The Centre for Deaf Studies has two video-conferencing systems. Both are based on computers with identical specifications: Pentium P100 systems running at 100 MHz with 16 MB of RAM. Each computer has a 17" SVGA monitor and runs under Windows 95. The PictureTel 100 video-conferencing system is installed on both. It functions at 128 kbit/s across ISDN phone lines using a coding method compliant with the H.261 standard (see TelSign report - Kyle et al, 1997). The systems were set up at two different locations at the Centre for Deaf Studies in Bristol, approximately half a mile apart.
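As a rough illustration of what this bandwidth means for moving images, the sketch below divides the 128 kbit/s channel into a data budget per frame at a few assumed frame rates. The frame rates are illustrative assumptions, not measurements from the trials.

```python
# Illustrative sketch only: how a 128 kbit/s ISDN connection translates into
# data available per video frame. The frame rates are assumed for illustration.

CHANNEL_KBITS_PER_SEC = 128  # two 64 kbit/s ISDN B-channels

for fps in (8, 10, 12.5, 15):  # hypothetical frame rates
    kbits_per_frame = CHANNEL_KBITS_PER_SEC / fps
    print(f"{fps:>4} frames/s -> {kbits_per_frame:.1f} kbit available per frame")
```

The point of the arithmetic is simply that the higher the frame rate, the less data is available to describe each frame, which is why fast hand movements are the first thing to suffer at this bit rate.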

Aims of Trials

The aim of this component of FORUM was to evaluate the use of sign language in a video conferencing system. Our primary interest related to the problems of using sign language in a two dimensional medium as opposed to real life. Furthermore, we were interested in problems that might arise given the linguistic structure of sign language. The first set of trials was set up to determine how effective video conferencing could be for two-way interaction using sign language. It was important to see how deaf people with a range of signing skills would cope with the new medium of communication.

The second set of trials considered the feasibility of using video conferencing over longer distances. This work was undertaken in conjunction with our partners in Ghent, Belgium. The added latency of a longer-distance connection is clearly an important consideration when looking at providing courses which can be accessed from further afield.

Having established the feasibility and limitations of this mode of communication, we examined case studies, attempting two interactive real-time distance links. The first was a virtual distance lecture: we set up the video-conferencing system in a lecture and analysed the remote student's interaction with the on-site students and lecturer. In addition, we set up a remote tutorial between a student at CDS and his tutor based in Manchester.

In the last set of trials reported here, a more precise approach to transmission measurement was taken with specific comparisons of live and video conferenced modes with text. Variables such as clothing and camera angle were analysed.

The studies were intended as pilot studies, allowing an exploration of the issues involved in the use of video conferencing for sign transmission.

Transmission of BSL

(undertaken in conjunction with the TelSign Project)

This test required four sessions and was conducted over a period of two days. Four subjects were involved in the testing. Their sign language abilities were subjectively rated by the deaf researchers in the range from "fluent" BSL (British Sign Language) to "fluent" SSE (Sign Supported English - mixed BSL and English). The tests were undertaken between the two sites at the Centre for Deaf Studies. Two subjects and a researcher were at each site.

1. Fingerspelling Test

The first test involved the fingerspelling of twenty-four words using the two handed fingerspelling alphabet. We expected that some of the letters would be more difficult to recognise than others (because of their shared formational parameters or because of the visual similarities); the words were specifically chosen to include some of those letters. The letters we expected to cause most difficulties were L M N R V (all made on the palm of the non-dominant hand) and to a lesser extent E, I and O (made on the three middle fingers of the non-dominant hand). There were groups of three-, four-, five- and six-letter words.

Table 1: Fingerspelled words from the first trial

Group One: Men, Lone, Acorn, Rot, Jobs, Liners, Run, Jeep, Jerry, Van, Musk, Arming, Vee, Mean, Roman, Shave, Bright, Vanity, Mat, Lank, Align, Story, Asleep, Lurked

Group Two: Man, Live, Alert, Rub, More, Learnt, Rim, Joke, Party, Low, Core, Arrays, Via, Moat, Liven, Honey, Rumour, Cavity, Jog, Easy, Money, Blown, Lonely, Victor

The tests were conducted as follows:

    1. Practice:

We tested the subject’s usual fingerspelling ability without the video-conferencing system by spelling out a series of words and recording the time taken to complete the test. Each subject in turn was asked to watch a project member fingerspell the words from one of the groups above, and was asked to write down the word.

The subjects were permitted to ask for the word to be re-spelled as many times as they wished. We were not concerned with the subject's understanding of the fingerspelling, or if they eventually made one or two errors when the word was written down. However in these tests only two words were recorded wrongly: one subject recorded Livers instead of Liners, and another Line instead of Live. In both cases confusion resulted from the subject mistaking the V and N letters.

    2. Trials

The tests were then repeated and subjects were asked to fingerspell the words to each other from remote locations, using the PictureTel video-conferencing system. Again, the time taken for this test to be completed was recorded. Subjects could ask for the word to be spelled again as often as they wished.

Results and Conclusions

Table 2: Fingerspelling of 24 words

Signer/Originator     Face-to-Face     Video-conferencing     Difference
Subject A             2 min 48 sec     3 min 56 sec           1 min 8 sec
Subject B             3 min 19 sec     6 min 0 sec            2 min 41 sec
Subject C             3 min 29 sec     6 min 35 sec           3 min 6 sec
Subject D             3 min 45 sec     8 min 25 sec           4 min 40 sec

In all cases, correct recognition of the fingerspelled words took longer through video-conferencing than it did face-to-face; in one case significantly longer. An interesting result was that the amount of difficulty presented by the video-conferencing system seemed to be directly related to the subjects' signing skills: the stronger the BSL, the smaller the difficulty. Subject A was the most proficient user of BSL, subjects B and C were very capable BSL users, but subject D used a great deal of English (although this was not measured and was not a feature of the test itself). Whereas subject D's time for the face-to-face test was similar to the other subjects' - although it was the slowest - the time taken for subject D to complete the test using video-conferencing was considerably longer than for the other subjects.
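To make the pattern in Table 2 easier to see, the short sketch below recomputes each subject's slowdown as the ratio of video-conferencing time to face-to-face time, using the figures from the table.

```python
# Sketch: relative slowdown per subject, computed from the Table 2 timings.

def seconds(minutes, secs):
    return 60 * minutes + secs

table2 = {  # (face-to-face, video-conferencing) in seconds
    "Subject A": (seconds(2, 48), seconds(3, 56)),
    "Subject B": (seconds(3, 19), seconds(6, 0)),
    "Subject C": (seconds(3, 29), seconds(6, 35)),
    "Subject D": (seconds(3, 45), seconds(8, 25)),
}

for subject, (face, video) in table2.items():
    print(f"{subject}: {video / face:.2f}x slower over the video link")
```

On these figures the slowdown rises steadily from roughly 1.4 times for subject A to over 2.2 times for subject D, consistent with the observation above.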

As expected, subjects found some letters more difficult to recognise than others. Sign transmission was more difficult when using the video-conferencing system. Subjects reported that a much greater level of concentration was needed, and even then, subjects were forced to ask for the words to be spelled again a number of times. All subjects found the letters L, M, N and R the most difficult to recognise and words including these letters were far more likely to be misunderstood.

2. Signing Test I:

This test was similar to the first test but instead of words being fingerspelled they were signed. The purpose of this test was to ascertain whether or not recognition of signs is related to their spatial position. The design of the test was exactly the same as test 1. The two groups of words were selected so that each group contained three sets of signs:

The signs were not controlled for one- or two-handed signing, inflection, or lip-patterns.

Table 3: Signs used for transmission

Group One: Useful, Really, Wide, Quiet, Insurance, Conference, Sly, Calm, Argue, Scar, Like, Sun, Moustache, Mine, Rain, Pretend, Operation, Snow

Group Two: Shout, Switzerland, Thunder, Sick, Farm, Encourage, Shame, Garden, Train, Clever, Angry, Clouds, Glasses, Frustrated, Expensive, Pink, Confidence, Cheap

The tests were conducted in a manner similar to test 1. Firstly, there was practice for the subjects to determine their usual signing ability without the video-conferencing system.

The subjects were permitted to ask for the sign to be repeated as many times as they wished.

In the main trials, subjects were asked to sign to each other over the video-conferencing system. Again, the time taken for subjects to receive the sign and to write it down was recorded. The subjects could ask for the word to be re-signed as many times as it was needed.

Results and Conclusions

Table 4: Time taken for correct sign recognition

Signer/Originator     Face-to-Face     Video-conferencing     Difference
Subject A             30 sec           2 min 0 sec            1 min 30 sec
Subject B             1 min 46 sec     2 min 53 sec           1 min 7 sec
Subject C             1 min 40 sec     3 min 32 sec           1 min 52 sec
Subject D             2 min 20 sec     3 min 48 sec           1 min 28 sec

In contrast to the fingerspelling test, the difference between face-to-face communication and communication using video-conferencing was less dramatic. This suggests a learning effect, which was also found in other trials (see below). This was especially true for subject D, who had struggled with the fingerspelling but completed this task in a similar time to subject C. It seems likely, from this test, that transmitting and recognising signs is easier than fingerspelling in a video-conferencing system.

Some additional observations were:

3. Signing Test II: Narration

The third test examined the subjects' understanding of a narration - in this case a short story or joke. The results of this test were not timed but subjects were asked to re-tell the joke or story and to repeat as much as possible of it to the researcher. The narration was signed as follows:

Results and Conclusions

The results were rather unexpected. All four subjects followed the narration with very little trouble. All were able to reproduce the story or joke faithfully and the two jokes even managed to be humorous! While seemingly trivial, this showed that subjects were able to watch the screen in a relaxed manner and understand and enjoy the information that they were receiving.

The main conclusion of this part of the test is that video conferencing is suitable for sign language communication. Subjects in this part of the test seemed to have benefited from having participated in the earlier tests - the more the video-conferencing system was used, the easier it became.

4. Signing Test III: Conversation

The fourth test involved allowing the subjects to converse in sign in a video-conference. We wanted to observe how the system would be used in a more informal situation and also whether the subjects would enjoy communicating in this way. Each pair of subjects conversed for ten minutes after which we asked questions and requested comments. The questions were:

Results and Conclusions

Overall the answers we received were very positive. All four subjects enjoyed using the video-conferencing system.

1. All subjects answered positively to the first question.

2. In answer to question two, all four subjects said that they did not find it any more tiring to communicate via video-conferencing than through normal signing. This was surprising, because our earlier tests showed that conveying the same information using video-conferencing did take longer - and seemed to require more concentration and effort on the part of the users.

3. The main problems that subjects identified were:

We can see that there was a major practice effect. As people became more familiar with the system, the necessary inter-personal adjustments were made. Just as with the telephone, there has to be a learning process.

Links with Belgium

Introduction

While signing over a short distance using video conferencing is important, the more pressing question is how effective the system is over longer distances. There was an opportunity to test video conferencing over a longer distance when members of the Horizon group went to visit the Belgian partners at Prodem in Ghent. A video link between Prodem's offices and the Centre for Deaf Studies in Bristol was set up. Deaf project members in Bristol who had not made the trip were able to discuss how the meeting was going in Belgium with a member of Prodem's deaf staff.

Results

The video-meeting

Although there were some problems initially, these tended to be a result of the use of different languages - Belgian Sign Language is a different language from British Sign Language. However, conversation was possible.

Figure 1: Video conferencing at Prodem in Ghent, Belgium

Ease of use

Prodem’s system was different from that in Bristol. In Belgium, the picture was clear enough not only for the signer to see, but for the other members of the group with signing skills to follow the conversation (see Figures 1 & 2). In Bristol, the two deaf people could both join in the conversation with the Belgian signer.

Effect of the delay during transmission

There was a noticeable delay in the transmission of sign caused by the system. This caused problems, as both deaf people would sign at the same time, not realising that the other had not finished. This was much more of a problem than in the previous tests in Bristol, and it continued to be a problem 30 minutes after the first video contact was made.

Other problems

There was also a noticeable loss of picture quality. This was evident from the initial connection, but became worse as the signers conversed. The image degradation was most evident around the small, fast-moving elements of the image; because these were mainly the finger movements and mouth patterns, it was a problem for the signers. As a result, they found that they had to slow down their signing to convey the message.

Figure 2: Video conferencing at Prodem in Ghent, Belgium

The cost of maintaining an ISDN link across this distance was another factor. In addition to the basic cost of the European call (doubled, because ISDN uses two phone lines), there was the increased cost of slower signing and of the repetition needed where the first attempt to sign had not been understood.

These trials tended to highlight the problems of the current technology. However, they did show that deaf people could communicate across this distance. This has not been possible before and indicates the potential for development.

Remote Lectures

Introduction

This study examined a remote lecture in a video-conference. The trial involved the course Using The Curriculum, which was delivered in BSL by a deaf lecturer to deaf students. The lecture was transmitted to a deaf researcher in another building. Both systems used were identical - PictureTel 100 with P100 Pentium computers. The task of the researcher was to assess how easy it would be for a deaf student to follow the lecture, and to interact with the lecturer and the other students. Another researcher observed the lecture live on site, assessing the impact of the presence of the system on the lecturer and on the other students.

The aim of the trial was to assess the problems arising from placing a video-conferencing system in the midst of a lecture. The lecture in question had only a small class of six students. Predicted problems were:

Trial Report:

1. Intrusion

Firstly, it should be pointed out that unlike most University lectures, almost all of the Centre for Deaf Studies lectures are recorded on video; indeed this lecture was also being recorded at the same time. This means that both the lecturer and the students are used to having technology "intruding" into their lectures. Even so, there was a great deal of excitement generated by having a computer system in the class, especially when it became clear that not only could they watch the remote user, but the remote user could also see them.

Only positive comments were noted from the students concerning this study. The positioning of the system itself caused the most intrusion. The initial plan was to set the video-conferencing system up against the wall, 4 metres from the lecturer. This proved to be impossible, as the PictureTel system has a limited zoom facility, so the equipment had to be placed in front of some students, approximately 2.5 metres from the lecturer. A different camera would be required for larger classes.

A full screen image of the remote user was set up in front of the lecturer so that he could monitor the remote user as well as the rest of the class.

Interaction

Interaction is a very important aspect of this medium. Video-conferencing allows the remote user to interact not only with the lecture content, but also with the other students.

Interaction between remote student and lecturer

The lecturer was quick to pick up on the idea that the remote user should be included in the class. It became important that the lecturer treated the remote student like any other member of the class. In fact, the lecturer had to interact with the remote student more than with the rest of the students, to make up for the atmosphere of the live class that the remote user was unable to access.

In addition, there were difficulties for the remote user in obtaining the lecturer’s attention. In the trial, the remote user had to wave his hands to try to attract the lecturer’s attention.

In a hearing environment this is easier, as the PictureTel system has audio speakers and a remote user can reach the lecturer vocally. In a deaf environment such as this trial, where both the remote user and the lecturer were deaf, something else was needed. A flashing light controlled by the remote user would have been the best solution, ideally something that could be fixed onto the local system and controlled from the remote system.

Interaction between remote student and other students

When we consider student interaction here, we do not include interaction where a student is standing in front of the class; there the student is in effect a surrogate lecturer and, from our point of view, should be treated as such. However, our observations showed that there was also a great deal of interaction between the students throughout the lecture, and the remote student missed all of it. While such interaction might not be essential for the passing of information from lecturer to students, it did appear to play an important part in the students' comprehension of that information, and it can therefore be concluded that it plays an important part in the learning process. With the set-up we had, there was no possibility of the remote student even being aware of such interaction, never mind joining in.

Having said that, we already use videos as an important part of the learning process and they do not normally include any of the class interactions. How important the loss of such class interaction is to the learning process is a separate but important discussion. If it is essential, however, then there should be some way for the remote student to see this interaction. This could be done by having a second camera which films the class from the lecturer's view, so that the remote user could see all of the interaction. Alternatively, if the remote user's camera were controllable by the remote user, there would be the ability to pan round to all areas of the class. Neither is feasible with the current set-up, although the more advanced PictureTel systems allow just this sort of control.

Interaction between remote students

Once we have overcome the problems arising from including one remote user in a lecture, we can then look at the potential for having more than one remote student. This can occur in two ways:

This raised potential problems in two areas. Firstly, we need to address two-way communication between multiple users. It is already possible to set up a two-way multicast lecture; this is an area we will need to examine in more depth.

The Remote User

The remote user indicated problems in four main areas:

Preparation

In this case, the remote user had no preparation for the lecture. He did not have any supporting notes, and because of the situation he was also thrown into the midst of a course which he had not been following and in which he had no background knowledge. This meant that he had to concentrate harder in order to follow what was going on. Obviously, this would not be the case for the usual remote student. Even so, the researcher noted that it would have been easier to follow the lecture if he had been provided with course notes. The remote user should, if possible, always be provided with the necessary courseware to support remote lectures.

Using the video-conferencing system

As the previous set of trials showed, using video-conferencing is more tiring for the remote user than for the other students. This lecture lasted one and a half hours, and the remote user noted that it was very tiring to concentrate for that length of time. Obviously there are breaks within a lecture of this length, but perhaps these were not evident to the remote user. The solution might be to have breaks more often, and to make sure that the remote user has a break too. In the longer term, it would be worth looking at the attention span of signers compared to speakers, and at how this is affected by using the video-conferencing system.

The use of sign language

There is always a delay involved in video-conferencing transmission. For a lecture this is not a problem, as long as the interaction is one-way, i.e. only the lecturer presenting. However, it becomes a problem when there is two-way interaction. Assuming we can overcome the problem of obtaining the lecturer's attention, all users may need to adjust to the in-built delays in information transfer, in the same way that hearing people using transatlantic phone links have to get used to delays and echoes. More tests on the speed of signing, and on how much of the information is transferred, will be required in future.

The use of the whiteboard

It was obvious that it was going to be difficult for the remote user to see anything written on the lecturer's physical whiteboard. With the brightness level in the software adjusted to give the best view of the lecturer, the whiteboard appeared very bright, and even with the thickest marker pens and carefully written large letters the writing was very indistinct. This became even more of a problem when the students wrote on the board. There were three potential ways of overcoming this problem:

Camera positions

From the trial we concluded that the video-conferencing camera has to be as central as possible to the lecturer and to the physical whiteboard. As stated earlier, the ideal position for the system was to place the camera on top of the monitor, about 1.5 metres from the ground, so that the lecturer was central in the picture. However, because of movement and interaction between lecturer and students, the lecturer was not always in the central area of the image and was at times off-camera altogether. This emphasises the need for the remote user to be able to control the camera.

An ideal filming position might be thought to be a close-up of the lecturer's head and upper torso, maximising the image without losing the signing area - i.e. a signing-space image. However, there are problems:

Two possible methods to overcome these problems are:

Technical Implications

Choice of clothing and backgrounds

This was predicted to be a problem but no systematic studies were carried out in this context.

The need for support

During this trial the classroom video-conferencing system crashed, and the connection was lost for a while. While it was easy to set up again, this raised an issue we had not previously considered: there must be a video-conferencing technician on hand, perhaps the person who is filming the lecture, to cope with such events in a way that causes minimal interruption to the lecture.

If the connection can be re-established quickly, then the loss to the remote user is minimal. However, it is also important to decide whether the lecture stops while re-connection takes place; i.e. is the remote student important enough for the lecture to stop, or should the situation be dealt with in the same way as a student leaving to use the toilet - i.e. the lecture goes on?

Cost of maintaining the connection

This lecture took place between 9.30am and 11.00am, which is the peak rate for telephone calls. This meant that it was necessary to maintain a two-line connection (because ISDN uses two lines for a video-conferencing connection) for an hour and a half. Our trial used a local-rate call, but a remote user could be anywhere in the country - feasibly anywhere in the world. The cost of a remote lecture then increases dramatically, and the implications of this should be examined.
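The arithmetic itself is simple, as the sketch below shows; the per-minute rates used here are hypothetical placeholders rather than actual 1997 tariffs, but the two-channel, ninety-minute multiplier is fixed by the trial itself.

```python
# Sketch of the call-cost arithmetic for a remote lecture. The per-minute
# rates are hypothetical placeholders, not actual 1997 tariffs.

LINES = 2           # an ISDN video-conferencing call occupies two B-channels
DURATION_MIN = 90   # the 9.30am-11.00am lecture

for label, rate_per_min in [("local (assumed)", 0.04),
                            ("national (assumed)", 0.10),
                            ("international (assumed)", 0.25)]:
    cost = LINES * DURATION_MIN * rate_per_min
    print(f"{label:<25} ~ {cost:.2f} pounds per lecture")
```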

Remote Tutorials

Introduction

The use of video-conferencing for remote tutorials is something that has great practical uses in Deaf Studies. The most obvious beneficiaries would be the Horizon trainees who would have the opportunity to discuss problems with their tutors over a video-conferencing link. This would include non-UK based trainees.

This trial was arranged to allow both the Centre for Deaf Studies and City College to assess the feasibility of running remote tutorials using video-conferencing. We were given an ideal opportunity to research this when one of our visiting researchers, who is deaf, wanted to sign with his tutor based at City College, Manchester. Given that this was a "real" tutorial rather than one set up just for this project, and because of its confidential nature, we were unable to monitor its progress and relied on feedback from the two people involved.

From this trial we wanted to assess whether:

The remote tutorial trial took place between Amy Townsend from the Centre for Deaf Studies at City College in Manchester and her student Bill Parks. City College uses a BT Olivetti system, which is based at their Wythenshawe campus on Moor Road in Manchester. Bill used the PictureTel 100 system at the Centre for Deaf Studies in Berkeley Square, Bristol.

Results

The overall result was that the trial went very well and was highly successful.

Goals achieved

Both parties felt that they had achieved a good level of communication, and while it was not as easy to sign as if they were in the same room, the benefits certainly outweighed the problems encountered. The tutorial lasted half an hour, and they covered much more than they could have done with a minicom call.

Ease of use

The image was very clear. Using a large-screen PC, the full-size image of the remote site was easily large enough for signing to be followed clearly. It was not the same as face-to-face signing, but it worked a lot better than they had thought it would.

Effect of the delay during transmission

They signed more slowly than usual, although not as slowly as Amy had expected they would have to. Afterwards, however, both of them felt exhausted: there was a need to concentrate on the screen, and there is more visual interference in video-conferencing than face-to-face.

Other problems

There were no problems in using two different systems. There is a standard for video-conferencing, and if both systems use this standard, then there is no incompatibility.

Conclusions

The tests showed that a reasonable quality of image was possible using the PictureTel system. This quality is sufficient for sign language communication in certain circumstances. The approximate frame rate of the reproduced images is at a level which we consider necessary for sign language communication. Some difficulty was experienced in interpretation when fast hand movements were used.

A delay of half a second was introduced in the transmission of video data for local calls, making natural conversation using both sign and voice difficult. If the scene depicted contained motion (for example, background movement or movement by the subject), the reduction in frame rate and the block artefacts introduced reached unacceptable levels. This was even more obvious where the distance was greatly increased. The link between Ghent and Bristol showed that at current bit-rates, the use of video conferencing for signing has severe restrictions.

Improvements can be expected with the introduction of the new H.263 standard. This can give a 50% reduction in bit rate over H.261, and new algorithms are being produced which can deliver a further 50% bit-rate reduction over H.263. As a result, the capabilities of this system may give a fair indication of the quality of video which will be possible over 28.8 kb/s channels when MPEG-4 is introduced. There is also the fact that the system we used, the PictureTel 100, is already a few years old and can no longer be considered cutting-edge technology.
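A quick worked check of that argument, starting from the 128 kbit/s used in these trials and applying the two quoted 50% reductions, lands close to the 28.8 kb/s figure mentioned above (this is only a back-of-envelope sketch of the quoted percentages).

```python
# Back-of-envelope check of the bit-rate argument above.

h261_rate = 128.0                # kbit/s used in these trials
h263_rate = h261_rate * 0.5      # ~50% reduction quoted for H.263
next_gen_rate = h263_rate * 0.5  # a further ~50% from newer algorithms

print(f"H.263 equivalent rate:      {h263_rate:.0f} kbit/s")
print(f"Next-generation equivalent: {next_gen_rate:.0f} kbit/s")
print("Modem channel for MPEG-4:    28.8 kbit/s")
```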

 

Measuring Sign Transmission

Chris Stone and Paul Scott conducted these trials

Introduction

This trial examined the ideal conditions for using the PictureTel 100 video-conferencing system as a medium for sign language transmission. The trial took place in July 1997 and involved three Deaf BSL users. The system was set up in two locations at CDS, with the Deaf participants using the system in rotation to transmit pre-determined information.

Method

We used a book of pictures and photographs. The book contained a series of double-page spreads; each spread carried twelve photographs or images which were similar to each other and of varying difficulty. We asked the Deaf participants to describe an image from the double-page spread to the viewer at the other end of the system. This person then had to determine which image had been described. When this was complete, the participants reversed roles and repeated the exercise. This was done for each set of pictures. All four sets of pictures were used for each condition, and the order was random.

The varying conditions used were:

  1. live mode - two participants sitting at a table ten feet apart
  2. remote - using the video conferencing system
  3. remote - using the video conferencing system wearing bright clothes
  4. remote - using the video conferencing system wearing patterned clothes
  5. remote - using the video conferencing system with a big window for the incoming picture
  6. remote - using the video conferencing system with the video camera at an incline of approximately 30°
  7. remote - using the video conferencing system with the whiteboard facility open and not using BSL

Note that where clothing is not mentioned it was plain, and that the camera faced the participants directly (without an incline) in all conditions other than condition 6.

Materials

The first set of pictures was a set of street diagrams. The second was a set of photos of a group of girls in a rural setting. The third was a set of photos in a cafeteria with three people sitting at a table. The fourth was a set of photos of a group of people in a classroom.

The same four sets of photographs and images were used in all the conditions except the last one. This used two sets of pictures of maps and of various animals.

Results

The summary results are shown below. Time was measured from when transmission of the description of each picture began until the sender stopped. Repetitions were the number of times the viewer requested a repeat of the information. Errors were the number of incorrect picture identifications by the viewer.

Condition     Time     Repetitions     Errors
Live          18       0.17            0.5
Plain         46       1.17            4
Bright        29       0.5             1.17
Pattern       28       0.5             1.33
Large         24       0.17            1
Angle         27       0.5             1.5
Text          72       0.22            0.44

Several specific comparisons can be made. The first is a direct comparison of live and text conversation, i.e. the first and last rows in the table. We can see clearly (Figure 1) that text conversation takes considerably longer than face-to-face conversation in BSL - around four times as long. Interestingly, the numbers of errors and repetitions are quite low in both cases, and there appears to be no difference in the error rate between these two conditions.

In the same figure we can compare the video-conference performance in sign with the live signed condition. The video-conference condition takes two and a half times as long, with many more repetitions and eight times as many errors. This can be partly attributed to practice effects, as performance improved somewhat in the following conditions, but it is indicative of the likely performance in the early stages of signing over a video-conference link.
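The ratios quoted above can be checked directly against the summary table; the sketch below simply repeats that arithmetic with the values copied from the table.

```python
# Checking the quoted ratios against the summary table values.

summary = {  # condition: (time, repetitions, errors)
    "Live":  (18, 0.17, 0.5),
    "Plain": (46, 1.17, 4.0),
    "Text":  (72, 0.22, 0.44),
}

live_time, _, live_errors = summary["Live"]
print(f"Text vs live, time:            {summary['Text'][0] / live_time:.1f}x")
print(f"Video (plain) vs live, time:   {summary['Plain'][0] / live_time:.1f}x")
print(f"Video (plain) vs live, errors: {summary['Plain'][2] / live_errors:.0f}x")
```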

 

Figure 2 shows the corresponding figures for the clothing conditions. Here we can see that there is a settling down and improvement in transmission performance. There are fewer errors and fewer repetitions.

 

In general terms, we can see that performance settles down in the video conference conditions and there are no differences found between bright coloured clothing and patterned clothing in terms of times taken to transmit and the extent of errors.

The same is true of the remaining conditions, where the use of a larger video window improves performance and marginally reduces the time taken to transmit. We would predict this, as the benefit of a larger image should lie with the viewer rather than with the transmission.

Changing the angle of video camera from centre screen to side view (30 degrees) slightly increases time taken and increases error rates.

Although the study can only be treated as a pilot for a more detailed analysis, we can draw some interesting conclusions that can be used to test later performance:

 

 

This work can be used to create a set of predictions for future work on video conferencing. The technology is still in a developmental phase and there are hopes that the performance can be improved to reach levels comparable to live face to face transmission of sign.

 
