
Vol 19, No. 3 (May 2002)


BANZAI: An Application of Natural Language Processing to Web-based Language Learning

Noriko Nagata
University of San Francisco

Abstract:
This paper presents BANZAI, a new intelligent language tutor program developed by the author over the past two years. The BANZAI application is programmed in Java and runs in a web browser over the Internet. It is designed to develop learners' grammatical and sentence production skills in Japanese as well as to instill cultural knowledge about Japan. It handles Japanese characters so learners can read and produce sentences in kana and kanji. More importantly, BANZAI employs artificial intelligence (AI) and natural language processing (NLP) technology, which enables the program to read, parse, and correct sentences typed by learners. The NLP analyzer consists of a lexicon, a morphological generator, a word segmentor, a morphological parser, a syntactic parser, an error detector, and a feedback generator. The first section of this paper provides an overview of BANZAI's capability. The second and third sections briefly describe each component of the NLP analyzer and explain how the system handles student errors. The fourth section illustrates actual lessons and sample exercises provided by BANZAI. The program has been integrated into the Japanese curriculum at the University of San Francisco since the autumn of 2000. Questionnaire results indicate an enthusiastic student response.


KEYWORDS

Intelligent Language Tutor, Natural Language Processing, Parsing, Japanese, World Wide Web

INTRODUCTION

Natural language processing (NLP) involves computational procedures for breaking down sentences into their grammatical components, a process known as parsing. The profession has witnessed increasing interest in the application of NLP to computer-assisted language learning (CALL) (e.g., Sanders, 1991; Loritz, 1992; Swartz & Yazdani, 1992; Nagata, 1993, 1997; Holland, Kaplan, & Sams, 1995; Yang & Akahori, 1998; Heift, 2001; Dansuwan, Nishina, Akahori, & Shimizu, 2001). This paper presents BANZAI, a new NLP-based software package written in Java that runs in an ordinary web browser via the Internet. After a short overview of BANZAI's special features, the paper describes the NLP algorithms at the heart of BANZAI and the graphical interface through which exercises are presented to students.

OVERVIEW

The BANZAI program's NLP technology allows students to type in any sentence and to receive immediate corrections and detailed feedback concerning the grammatical errors they have committed. The importance of this feature is apparent when one compares BANZAI to traditional, non-NLP CALL programs. Without NLP technology, error correction must proceed by simple, rote matching techniques. Hence, the possible answers recognized by the program must be severely restricted, as they are in traditional multiple choice or fill-in-the-blank exercises. For example, in a multiple choice question with four responses, students may select only three different erroneous responses. Similarly, a fill-in-the-blank exercise probes only for errors concerning the word or phrase entered in the blank because the rest of the correct answer is already provided. The trouble with such exercises is that the specific errors students would have produced may not be included in the narrow range of errors recognized or admitted by the program. In contrast, BANZAI's NLP capability allows students to type any sentence in response to any exercise and can provide detailed feedback regarding any errors they produce. Hence, students can receive more frequent feedback concerning their specific weaknesses than is possible in traditional programs. For this reason, NLP technology represents a dramatic step forward for CALL (Holland, 1995).

Since the point is to allow students to freely produce language, it is important that the underlying NLP processor be capable of diagnosing a wide range of errors. The NLP technology in BANZAI can process all grammatical constructions introduced in a standard Japanese curriculum from beginning through advanced levels. This ability enables the program to detect and to provide detailed feedback concerning virtually any grammatical errors students might produce.

The pedagogical effectiveness of NLP-generated feedback has been demonstrated in controlled empirical studies. To avoid comparing apples with oranges, it makes sense to compare the effectiveness of a system employing NLP technology against the very same system with the NLP capability disabled and replaced by a more traditional format (MacWhinney, 1995; Sams, 1995). The author of this paper conducted a series of empirical studies of this kind, and the results support the increased effectiveness of parser-driven intelligent feedback over traditional feedback (Nagata, 1993, 1995, 1996, 1997; Nagata & Swisher, 1995). The author's recent study on the relative effectiveness of multiple choice versus production exercises based on the new BANZAI system indicates that production exercises are significantly more effective than multiple choice exercises (Nagata, 2001).

Developers of NLP-based programs have tended to focus on implementing the NLP component rather than on providing an attractive, multimedia interface for students (see Holland, 1995). This focus is partly due to the fact that computer languages of sufficient power to support NLP procedures did not provide practical support for multimedia interface development. The BANZAI program, however, is written in Java, which provides excellent support both for sophisticated NLP programming and appealing multimedia applications. As a result, the BANZAI interface is user friendly and visually appealing, making full use of digital photographs, computer graphics, pull down menus, button selections, and Japanese sounds. Each exercise in BANZAI is framed in a conversational setting, along with a relevant photographic or graphical image of Japan, and asks learners to produce a target sentence that is likely to be uttered in real communicative situations. It also supports input and output of Japanese characters so that students can gain experience using the Japanese writing system. The program now offers a series of 24 Japanese language lessons readily integrated into beginning through advanced level Japanese courses.

The program runs in an ordinary web browser over the Internet, affording web-based instruction and distance learning applications. It is very fast on standard personal computers (both PCs and Macintoshes) even when it is used over the Internet. Since BANZAI is written in Java, it is platform-independent: a significant advantage in today's evolving and increasingly heterogeneous university computer clusters.

THE BANZAI NLP ANALYZER

The BANZAI NLP analyzer consists of a lexicon, a morphological generator, a word segmentor, a morphological parser, a syntactic parser, an error detector, and a feedback generator, each of which is described here. The BANZAI system was developed from scratch, entirely by the author, including all components of the NLP analyzer.


Lexicon and Morphological Generator

The BANZAI lexicon contains the basic Japanese vocabulary typically encountered in a three-year Japanese curriculum and is readily expandable if necessary. For verbs, adjectives, and copulas, only the root forms are listed in the lexicon because the BANZAI morphological generator can automatically produce the stems and other base forms by attaching inflectional endings and auxiliaries to the roots. In this manner, BANZAI handles all kinds of Japanese verb, adjective, and copula conjugations with minimal lexical entries. Since the beginning lessons accept input in Roman characters, the lexicon stores separate Roman and Japanese character versions of each lexical item.
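
For illustration only, a minimal Python sketch of this idea is given below; the ending table is a tiny hypothetical fragment and the names are not BANZAI's.

# A minimal, illustrative generator: only the verb root is stored, and base
# forms are produced by attaching inflectional endings to it.  The ending
# table below is a small hypothetical fragment, not BANZAI's full system.
CONSONANT_STEM_ENDINGS = {
    "dictionary": "u",        # tukur- -> tukuru
    "polite stem": "i",       # tukur- -> tukuri (basis of tukurimasu)
    "negative base": "a",     # tukur- -> tukura (basis of tukuranai, tukurase-)
    "ba-conditional": "eba",  # tukur- -> tukureba
}

def generate_base_forms(root):
    """Return a few base forms for a consonant-stem verb root such as tukur- 'make'."""
    return {name: root + ending for name, ending in CONSONANT_STEM_ENDINGS.items()}

print(generate_base_forms("tukur"))
# {'dictionary': 'tukuru', 'polite stem': 'tukuri',
#  'negative base': 'tukura', 'ba-conditional': 'tukureba'}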

Word Segmentor

Japanese writing does not leave a space between words,1 so the BANZAI word segmentor divides the learner's input sentence (a typed character string) into lexical items, referring to the BANZAI lexicon. Several different segmentations and lexical assignments are often possible for one character string. For example, nihon can be identified as nihon 'Japan,' ni 'two' and hon (counter for long objects), or ni (particle) and hon 'book,' etc. The word segmentor finds all possible segmentations and lexical assignments.
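
A minimal Python sketch of such an exhaustive search, using a three-entry toy lexicon and hypothetical names (not BANZAI's implementation), is given below.

# Enumerate every way of covering the input string with lexicon entries,
# mirroring the nihon / ni + hon ambiguity discussed above.
LEXICON = {"nihon": "Japan", "ni": "two / particle", "hon": "book / counter"}

def segmentations(s):
    """Yield every way of dividing s into items listed in LEXICON."""
    if not s:
        yield []
        return
    for i in range(1, len(s) + 1):
        prefix = s[:i]
        if prefix in LEXICON:
            for rest in segmentations(s[i:]):
                yield [prefix] + rest

print(list(segmentations("nihon")))
# [['ni', 'hon'], ['nihon']]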

Morphological Parser

The BANZAI morphological parser combines segmented words into noun compounds or final verb forms (if any) in the input sentence. For example, suppose the input character string is


Bangohanotukuraseraretandesu.2

The word segmentor divides this string into words: ban 'evening,' gohan 'rice,' o (object particle), tukura 'make,' se (causative form), rareta (passive form), and ndesu (extended predicate form). Then, the morphological parser identifies the two nouns ban and gohan as the compound noun bangohan 'dinner,' and the verb base form tukura and the auxiliaries se, rareta, and ndesu as the final verb form tukuraseraretandesu 'was made/caused to make.'
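
A minimal Python sketch of this combining step (hypothetical names and categories, not BANZAI's code) is given below: adjacent nouns are merged into a compound, and a verb base followed by auxiliaries is merged into a single final verb form.

def morph_parse(tagged):
    """tagged: list of (word, category) pairs produced by the word segmentor."""
    out = []
    for word, cat in tagged:
        if out and cat == "noun" and out[-1][1] == "noun":
            out[-1] = (out[-1][0] + word, "noun")      # compound noun
        elif out and cat == "auxiliary" and out[-1][1] in ("verb", "auxiliary"):
            out[-1] = (out[-1][0] + word, "verb")      # growing final verb form
        else:
            out.append((word, cat))
    return out

tokens = [("ban", "noun"), ("gohan", "noun"), ("o", "particle"),
          ("tukura", "verb"), ("se", "auxiliary"),
          ("rareta", "auxiliary"), ("ndesu", "auxiliary")]
print(morph_parse(tokens))
# [('bangohan', 'noun'), ('o', 'particle'), ('tukuraseraretandesu', 'verb')]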


Syntactic Parser

The BANZAI syntactic parser's function is to determine whether the input string is a grammatical (well formed) or ungrammatical (ill formed) sentence. The parser uses context free phrase structure rules3 to build words into phrases and phrases into sentences by means of a bottom-up parsing technique (Winograd, 1983; Matsumoto, 1988). If it cannot build all of the words in the given string into a sentence, the string is ungrammatical.

The BANZAI syntactic parser includes 14 context free phrase structure rules, and each rule is consecutively applied to process the input. The above input string now consists of three words, bangohan 'dinner,' o (object particle), and tukuraseraretandesu 'was made/caused to make.' Rule 1

NP --> N (P) (P)  [note 4]

combines bangohan (N = noun) and o (P = particle) into bangohan o (NP = noun phrase). Rule 5

VP --> V*  [note 5]

identifies tukuraseraretandesu (V = verb) as a verb phrase (VP). Rule 9

S --> VP

further identifies it as a sentence (S) (i.e., a verb itself can be a sentence in Japanese because the subject and the object of the sentence are often dropped in context). Then, rule 12

S --> (NP*) S

combines bangohan o (NP) and tukuraseraretandesu (S) into a single sentence (S). At this point, the parser has succeeded in building a well formed sentence accounting for each component of the input string.
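
For illustration, the derivation above can be reproduced with a toy reduction loop over bare category symbols; the patterns below are simplified stand-ins for rules 1, 5, 9, and 12, and the sketch is not BANZAI's actual bottom-up parser, which operates on full lexical and grammatical information.

import re

# Each entry rewrites a run of category symbols (the pattern) into the
# category on the left, roughly mimicking rules 1, 5, 9, and 12 above.
RULES = [
    ("NP", r"N( P){0,2}"),  # rule 1:  NP --> N (P) (P)
    ("VP", r"V( V)*"),      # rule 5:  VP --> V*
    ("S",  r"VP"),          # rule 9:  S  --> VP
    ("S",  r"(NP )*S"),     # rule 12: S  --> (NP*) S
]

def reduces_to_sentence(categories):
    """Apply each rule once, in order, and report whether a single S remains."""
    seq = " ".join(categories)
    for target, pattern in RULES:
        new_seq = re.sub(rf"\b{pattern}\b", target, seq)
        if new_seq != seq:
            print(seq, "->", new_seq)
            seq = new_seq
    return seq == "S"

# bangohan (N), o (P), tukuraseraretandesu (V)
print(reduces_to_sentence(["N", "P", "V"]))   # True: the string is well formed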

In order to rule out certain ungrammatical constructions, the phrase structure rules are programmed with additional constraints that were suppressed, for brevity, in the derivation above. For example, the constraints for rule 1

NP --> N (P) (P)


are roughly as follows: if an NP consists of an N without any P, then the N must be a degree, manner, or time nominal; if an NP consists of an N and two Ps, then the second P must be the particle wa or mo; and so forth.
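
A minimal Python rendering of these two constraints is given below; the set of degree, manner, and time nominals is illustrative only, and the names are hypothetical rather than BANZAI's.

# A toy check of the constraints on rule 1 (NP --> N (P) (P)) stated above.
DEGREE_MANNER_TIME_NOMINALS = {"kinoo", "takusan", "sukosi"}  # illustrative only

def np_constraints_satisfied(noun, particles):
    if len(particles) == 0:    # bare N must be a degree, manner, or time nominal
        return noun in DEGREE_MANNER_TIME_NOMINALS
    if len(particles) == 2:    # N P P: the second P must be wa or mo
        return particles[1] in ("wa", "mo")
    return True                # N + one P: no additional constraint shown here

print(np_constraints_satisfied("tyoozyoo", ["ni", "wa"]))  # True
print(np_constraints_satisfied("tyoozyoo", ["ni", "o"]))   # False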

When it comes to applying NLP to CALL, it is not enough simply to say "ill formed" when the attempted parse fails. The point is to return informative feedback indicating the precise grammatical nature of the error so that students receive reinforcement regarding exactly the rules they have not yet mastered. The BANZAI error detector checks the result of each phrase structure rule and flags ungrammatical structures or inappropriate lexical items, if any. Then, the feedback generator produces detailed error messages explaining the nature of the error. The following section presents an example of a BANZAI exercise and describes the process of analyzing the student's input sentence and diagnosing errors.

PROCESS OF ERROR DIAGNOSIS BY THE NLP ANALYZER

Figure 1 summarizes the process of error diagnosis implemented in the BANZAI NLP analyzer.

Figure 1

Process of error diagnosis by the BANZAI NLP Analyzer

[Figure not reproduced]


Step 1: Analysis of the Target Answer Stored in the Computer

The correct answers to a question are encoded in a simple format that is easy to input but that does not include all the detailed information required to diagnose the student's errors. The BANZAI program uses its parsing capability to generate this detailed information automatically from the simplified representation stored in the exercise. This feature facilitates the production of large exercise sets because hand coding the necessary grammatical information for each exercise would be prohibitively time consuming.

For example, the following exercise is presented in the BANZAI lesson on Japanese conditionals (the graphical interface seen by the student is not depicted here). The context for the exercise is a tour of Enoshima Island near Tokyo, of which a digital photograph is provided on the screen.

(1) Sample BANZAI exercise

You found an interesting place where couples can hang padlocks to affirm their love. Tell Ms. Satoo that when you went to the top of Enoshima, there was an interesting place.

The program stores the target answer data for each exercise. The following data set illustrates the answer data for this exercise.

(2) Correct answer data

[Answer data in Japanese not reproduced]

Each word required for the answer is listed in parentheses together with its grammatical category. The verbs and adjectives are listed both in the forms required in the correct answer to the question and in their dictionary forms. The particles are listed together with their grammatical functions. These features are used when diagnosing the student's grammatical errors. If the instructor wishes to accept alternative words in the target answer (including alternative writings in hiragana and kanji), any number of alternative words can be listed within the parentheses, separated by slash marks (…/…/…). Alternative phrases (consisting of more than one word) are listed within angle brackets, separated by vertical bars <…|…|…>. If the instructor wants to make a certain word optional, a question mark can be inserted after the optional word in the parentheses (…?). In colloquial speech, the particles wa, ga, and o are sometimes dropped, and the instructor can determine whether these particles are optional, depending on the style required for the exercise.

Since the BANZAI parser takes care of flexible word order between nominal phrases, the author of the exercise does not have to type all alternative sentences to allow different admissible word orders. Let us suppose that, in response to the question above, the student types the string

(L)

Tyoozyoogaikeba, omosiroinotokorogaarimasita

which contains several errors. The BANZAI program first eliminates from the target answer data (2) any alternative or optional words that do not occur in the student's input string, to arrive at a target answer matching the lexical items present in the student's string. In this case, the version selected is

(T)

Enosima no tyoozyoo ni ikuto, omosiroi tokoro ga arimasita

The student used the kanji versions of tyoozyoo and tokoro and the hiragana versions of omosiroi and arimasita, so the alternative lexical items in the target answer data (2) that do not match these items are all eliminated. The student did not use the sentence particle yo, so this particle is eliminated as well.
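
For illustration, the following Python sketch reads a simplified version of the answer-data notation (word alternatives inside parentheses separated by slashes, a trailing question mark for optional words) and performs the elimination step. Grammatical categories, phrase alternatives, and kana/kanji variants are omitted, the comparison is made against an already segmented word list rather than the raw input string, and all names are hypothetical rather than BANZAI's.

import re

def select_target(answer_data, student_words):
    """Keep, for each slot, the alternative found in the student's input; drop
    unused optional words; fall back to the first alternative for required
    words the student omitted (these will later be flagged as missing)."""
    selected = []
    for group in re.findall(r"\(([^)]*)\)", answer_data):
        alternatives = [a.strip() for a in group.split("/") if a.strip()]
        optional = any(a.endswith("?") for a in alternatives)
        words = [a.rstrip("?") for a in alternatives]
        match = next((w for w in words if w in student_words), None)
        if match is not None:
            selected.append(match)
        elif not optional:
            selected.append(words[0])
    return selected

student = ["tyoozyoo", "ga", "ikeba", "omosiroi", "no", "tokoro", "ga", "arimasita"]
answer = ("(Enosima) (no) (tyoozyoo) (ni) (ikuto) "
          "(omosiroi) (tokoro) (ga) (arimasita) (yo?)")
print(select_target(answer, student))
# ['Enosima', 'no', 'tyoozyoo', 'ni', 'ikuto',
#  'omosiroi', 'tokoro', 'ga', 'arimasita']   # the optional yo is eliminated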

Next, the BANZAI program parses the selected target answer version (T) to determine its grammatical structure, saving the author the task of entering the possible grammatical forms of the correct answer by hand. The following data set illustrates the grammatical information generated by the BANZAI parser for the target answer.

(3) Grammatical information for the target answer

[Grammatical information display not reproduced]


Step 2: Analysis of the Student's Input

Now turn to the analysis of the student's response (L). First, the word segmentor consults the lexicon to divide the student's input (character string) into words and to assign grammatical roles to the words. If a word has multiple roles (e.g., the word no can be the modifier particle, the extended predicate form, or the nominal), alternatives that fail to match the correct answer data (2) are discarded. Then, the BANZAI morphological parser combines the segmented words into noun compounds or final verb forms, if any. The following display is one of the segmentations generated from the student's response (L) above.

(4) Word segmentation of the student's input

[Word segmentation display not reproduced]

Next, the error detector checks whether there are any unknown words, missing words, unexpected words, or predicate conjugation errors, using the grammatical information in (3) and the generated segmentation (4). The error detector does not find any unknown words in (4), but detects a missing word, Enosima, and a predicate error: the first verb ikuto in the target answer is in the to-conditional form, while the first verb ikeba in the student's input in (4) is in the ba-conditional form. The feedback generator produces corresponding error messages and stores them to pass along to the student after parsing is completed.
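
A minimal Python sketch of these lexical-level checks is given below; the data structures and names are hypothetical simplifications, not BANZAI's internal representation. Predicates are matched on their dictionary forms so that conjugation errors can be reported separately from missing or unexpected words.

def lexical_errors(target, student):
    """target, student: lists of dicts with 'word' and 'category'; predicate
    entries also carry 'dictionary' (dictionary form) and 'form' (conjugation)."""
    errors = []
    student_words = {w["word"] for w in student}
    target_words = {w["word"] for w in target}

    for w in target:                      # required non-predicate words not typed
        if w["word"] not in student_words and "dictionary" not in w:
            errors.append("missing word: " + w["word"])
    for w in student:                     # typed words with no counterpart
        if w["word"] not in target_words and "dictionary" not in w:
            errors.append("unexpected word: " + w["word"])

    # Predicates: match on dictionary form, then compare the conjugated form.
    target_predicates = {w["dictionary"]: w for w in target if "dictionary" in w}
    for w in student:
        t = target_predicates.get(w.get("dictionary"))
        if t and w["form"] != t["form"]:
            errors.append("predicate error: " + w["word"]
                          + " should be in the " + t["form"] + " form")
    return errors

target = [{"word": "Enosima", "category": "noun"},
          {"word": "tyoozyoo", "category": "noun"},
          {"word": "ikuto", "category": "verb",
           "dictionary": "iku", "form": "to-conditional"}]
student = [{"word": "tyoozyoo", "category": "noun"},
           {"word": "ikeba", "category": "verb",
            "dictionary": "iku", "form": "ba-conditional"}]
print(lexical_errors(target, student))
# ['missing word: Enosima',
#  'predicate error: ikeba should be in the to-conditional form']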

The syntactic parser then applies each phrase structure rule consecutively to process the student's sentence, and the error detector flags ungrammatical phrases, if any, during the parsing process. For example, after rule 1, the error detector checks whether each noun phrase (NP) is followed by an appropriate particle in the student's input. The result of applying phrase structure rule 1 is listed in (5).

(5) Grammatical information generated by rule 1 for the student's input

[Grammatical information display not reproduced]

The error detector compares the noun phrases in (5) with the noun phrases generated from the target answer and flags any differences found between them. For example, in the first NP in (5), tyoozyoo is marked with the subject particle ga in the student's input, while it is marked with the direction particle ni in the target answer, so the feedback generator produces a corresponding particle error message. Next, when rule 3 (NP --> AP NP) is applied, the word combination omosiroi no tokoro ga in the student's input fails to generate a noun phrase due to the intervening particle no, so the attempted parse fails. The error detector flags this ungrammatical structure, and the feedback generator produces another particle error message. All errors are categorized into "missing word," "unexpected word," "particle error," "predicate error," or "word order error," and the feedback generator presents the error messages listed in (6) on the screen.
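
Before turning to the feedback in (6), the noun-phrase comparison itself can be sketched as follows (again in Python, with hypothetical structures rather than the parser's actual grammatical representations).

def particle_errors(target_nps, student_nps):
    """Each NP is a dict: {'noun': ..., 'particle': ..., 'function': ...}."""
    errors = []
    target_by_noun = {np["noun"]: np for np in target_nps}
    for np in student_nps:
        t = target_by_noun.get(np["noun"])
        if t and np["particle"] != t["particle"]:
            errors.append("particle error: " + np["noun"] + " should take the "
                          + t["function"] + " particle " + t["particle"]
                          + ", not " + np["particle"])
    return errors

target_nps = [{"noun": "tyoozyoo", "particle": "ni", "function": "direction"},
              {"noun": "tokoro", "particle": "ga", "function": "subject"}]
student_nps = [{"noun": "tyoozyoo", "particle": "ga", "function": "subject"},
               {"noun": "tokoro", "particle": "ga", "function": "subject"}]
print(particle_errors(target_nps, student_nps))
# ['particle error: tyoozyoo should take the direction particle ni, not ga']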

(6) Feedback in response to the student's input sentence

Missing word
[Feedback message in Japanese not reproduced]

Particle error
[Feedback message in Japanese not reproduced]

Predicate error
[Feedback message in Japanese not reproduced]

Try it again.

BANZAI LESSONS

The BANZAI program currently includes a series of 24 Japanese language lessons. Each lesson is devoted to a target grammatical structure and is unified by a cultural theme such as Kamakura, Izu, Kyoto, or department stores. For example, four lessons of the BANZAI series provide practice on Japanese particles and are framed within a visit to Kamakura (the medieval capital of Japan). To maintain continuity and student interest, the exercises center on the activities of a range of fictional manga (cartoon-style) characters that are sometimes integrated into digital photographs as computer graphics.6

Figure 2 shows the first page of Particle Lesson 1 (Kamakura Tour), which introduces the two fictional characters involved in the lesson in a photograph of the Hachiman shrine in Kamakura. The first page also provides grammar notes (in the scrolling window at the bottom) so that students can review the grammatical concepts before beginning the exercises.

Figure 2

The Introductory Page of Particle Lesson 1

[Figure not reproduced]

All lessons may be used as tutorials without a teacher's supervision. In addition to grammar notes, they include extensive exercises together with relevant vocabulary hints and detailed feedback so anyone can use BANZAI to learn or to review basic structural patterns in Japanese. The appendix at the end of this paper lists the BANZAI lessons, ordered from elementary to more advanced. The first 12 lessons are targeted for beginning students, and the second 12 lessons are for intermediate and advanced students; other orders are feasible, depending on the instructor's curriculum. Each lesson requires about 1 hour (ranging from 30 to 70 minutes) to complete.

Each exercise provides a communicative situation resembling a daily conversation so that learners can apply BANZAI lessons to real life communication. The exercises in the Honorifics Lessons are integrated into a story about touring Kyoto (see Figure 3).

Figure 3

An exercise in Honorifics Lesson 1

[Figure not reproduced]

Photographs of major Kyoto sights, cultural notes about Kyoto temples and streets, and traveling tips enhance the relevance of the production tasks. The context of the exercise is presented in the window at the upper left, with a photographic illustration at the upper right. Students' responses are entered in the window in the center of the display, and the feedback generated by BANZAI appears in the window just below it. Several buttons are available to learners: "Vocabulary" for looking up the words used in the current exercise, "Grammar" for reviewing the grammar note provided at the beginning of the lesson, and "Feedback" for opening the feedback window. When learners' answers are correct, the "Sound" button appears, allowing them to hear the pronunciation of the correct answer. If learners fail three times to produce a correct answer, both the "Sound" and "Answer" buttons are presented. Clicking the "Answer" button shows the correct answer(s).

The BANZAI exercises consist of five types of production-based tasks, designed to develop learners' grammatical and production skills through word-level, phrase- or clause-level, sentence-level, and paragraph-level exercises. Word-level exercises (asking learners to type in a word) impart confidence with word-level production (e.g., target particles and new verb forms) before learners move on to phrase-level and sentence-level production. Of course, the special strength of NLP technology is most apparent in sentence production exercises which provide students with unlimited opportunities to commit errors and to receive detailed feedback targeted to the precise nature of the error. (See, for example, the feedback messages in Figure 3 above.) Sentence-level practice is followed by paragraph-level exercises (involving reading comprehension) in which learners are presented a short paragraph that simulates writing a letter or journal entry and are asked to fill in blanks in the paragraph using the target forms of the relevant lexical items. The last type of task is listening comprehension. A sentence involving the target structure is presented orally, and students are asked to type what they have heard.

The BANZAI lessons have been integrated into the Japanese curriculum at the University of San Francisco since the autumn of 2000. A questionnaire was administered to 36 students after they worked on the Particle Lessons (Kamakura Tour). The following responses indicate the students' favorable attitudes toward BANZAI. The students' remarks pertain both to the NLP-generated feedback and to the multimedia interface.

• The content is interesting, also easy to use. Lots of pictures and explanation of where or what the pictures are about. The grammar and error message helped me to learn and understand the lesson fast and easy.

• To practice using particles is a conceptual way. The photographs and relating of them to the sentence-exercises; and the help given when wrong answer was entered—all these are great!

• I found the BANZAI program to be innovative, fascinating, a very educational. It would be a great program to use at home, or for studying for an upcoming exam.

• I like the BANZAI program very much. I have a hard time with "particles", and this program helped clarify things. It's also interesting, and keeps you alert. It's nice to have your mistakes identified, so you can learn from your mistakes.

• I really liked how the exercises were set up and the little hints here and there to tell you which particle to use. It also includes beautiful pictures and interesting descriptions. It is very easy to use and understand.


HARDWARE AND SOFTWARE REQUIREMENTS

The BANZAI program runs in web browsers on Windows and Macintosh operating systems. The program can be opened in both Internet Explorer and Netscape Navigator for Windows, but only in Internet Explorer for Macintosh. The reason for this restriction is that BANZAI is written in Java, and Netscape Navigator is not yet compatible with the MRJ (Mac OS Runtime for Java) on the Macintosh. (This situation may change soon.) Recent Macintosh computers come bundled with both Internet Explorer and Netscape Navigator so the restriction is not really much of a problem. Until very recently, it was not possible to get English-based operating systems to handle input and output of Japanese characters in Java applets without installing an expensive Japanese operating system. However, Windows 2000 and Macintosh OS above 9.0 are now delivered with multilanguage packs,7 and a Java plugin is already installed in those systems to run Java applets in web browsers. Thus, installing the Japanese language pack that comes with the machine now suffices for running the program. The BANZAI program also runs on older platforms equipped with any version of the Japanese Windows operating system, if the international Java plugin is installed, and on older Macintosh computers equipped with the Japanese Language Kit and the international version of MRJ. Nevertheless at this point, the best solution is simply to upgrade the whole system to Windows 2000 or Mac OS above 9.0.

NOTES

1 Beginning students sometimes wish to leave spaces between words written in Japanese characters to make it easier to understand what they are writing. BANZAI accepts such input because it ignores spaces.

2 The romanization of Japanese in this paper follows an adaptation of the Shinkunrei-shiki "New Official System," but BANZAI also accepts the Hepburn romanization (commonly used in Japan).

3 In a context free grammar, the left side of a phrase structure rule consists of only one nonterminal symbol.

4 Enclosing a grammatical category in parentheses means that it is optional, so rule 1, NP --> N (P) (P), can be NP --> N, NP --> N P, or NP --> N P P.

5 Marking a grammatical category with a star symbol means that it can occur repeatedly, so rule 5, VP --> V*, can be VP --> V, VP --> V V, VP --> V V V, and so forth.

6 The author took all of the photographs for BANZAI in Japan, and she created the graphics for the program in Canvas.

7 The Windows multilanguage packs can be installed on any language version of Windows 2000.


REFERENCES

Dansuwan, S., Nishina, K., Akahori, K., & Shimizu, Y. (2001). Development and evaluation of a Thai learning system on the Web using natural language processing. CALICO Journal, 19 (1), 67-88.

Heift, T. (2001). Learner control and error correction in ICALL: Browsers, peekers, and adamants. Paper presented at the annual CALICO Symposium, Orlando, FL.

Holland, V. M. (1995). Introduction: The case for intelligent CALL. In V. M. Holland, J. D. Kaplan, & M. R. Sams (Eds.), Intelligent language tutors. Mahwah, NJ: Lawrence Erlbaum.

Holland, V. M., Kaplan, J. D., & Sams, M. R. (Eds.). (1995). Intelligent language tutors. Mahwah, NJ: Lawrence Erlbaum.

Loritz, D. (1992). Generalized transition network parsing for language study: The GPARS system for English, Russian, Japanese, and Chinese. CALICO Journal, 10 (1), 5-22.

MacWhinney, B. (1995). Evaluating foreign language tutoring systems. In V. M. Holland, J. D. Kaplan, & M. R. Sams (Eds.), Intelligent language tutors. Mahwah, NJ: Lawrence Erlbaum.

Matsumoto, Y. (1988). [Chapter 3: Parsing methods; in Japanese]. In H. Tanaka & J. Tsujii (Eds.), [Natural language understanding; in Japanese]. Tokyo: [Japanese publisher name not reproduced].

Nagata, N. (1993). Intelligent computer feedback for second language instruction. The Modern Language Journal, 77 (3), 330-339.

Nagata, N. (1995). An effective application of natural language processing in second language instruction. CALICO Journal, 13 (1), 47-67.

Nagata, N. (1996). Computer vs. workbook instruction in second language acquisition. CALICO Journal, 14 (1), 53-75.

Nagata, N. (1997). An experimental comparison of deductive and inductive feedback generated by a simple parser. System, 25 (4), 515-534.

Nagata, N. (2001). The relative effectiveness of multiple-choice and production exercises based on the BANZAI parser. Unpublished manuscript.

Nagata, N., & Swisher, M. V. (1995). A study of consciousness-raising by computer: The effect of metalinguistic feedback on second language learning. Foreign Language Annals, 28 (3), 337-347.

Sams, M. R. (1995). Advanced technologies for language learning: The BRIDGE project within the ARI language tutor program. In V. M. Holland, J. D. Kaplan, & M. R. Sams (Eds.), Intelligent language tutors. Mahwah, NJ: Lawrence Erlbaum.

Sanders, R. (1991). Error analysis in purely syntactic parsing of free input: The example of German. CALICO Journal, 9, 72-89.

Swartz, M. L., & Yazdani, M. (Eds.). (1992). Intelligent tutoring systems for foreign language learning: The bridge to international communication. London: Longman.


Winograd, T. (1983). Language as a cognitive process: Vol. I: Syntax. Boston: Addison-Wesley.

Yang, J. C., & Akahori, K. (1998). Error analysis in Japanese writing and its implementation in a computer-assisted language learning system on the World Wide Web. CALICO Journal, 15, 47-66.

APPENDIX

BANZAI Lessons:

1. Verb -Masu Form Lesson (At the Train Station)

2. Particle Lesson 1 (Kamakura Tour)

3. Particle Lesson 2 (Kamakura Tour)

4. Particle Lesson 3 (Kamakura Tour)

5. Particle Lesson 4 (Kamakura Tour)

6. Noun Modifier Lesson 1 (Tokyo Tour)

7. Noun Modifier Lesson 2 (Tokyo Tour)

8. Verb Direct Form Lesson 1 (At the Supermarket)

9. Verb Direct Form Lesson 2 (At the Restaurant)

10. Verb Direct Form Lesson 3 (At the Supermarket)

11. Gerund Lesson 1 (Izu Tour)

12. Gerund Lesson 2 (Izu Tour)

13. Honorifics Lesson 1 (Kyoto Tour)

14. Honorifics Lesson 2 (Kyoto Tour)

15. Honorifics Lesson 3 (Kyoto Tour)

16. Relative Clause Lesson 1 (At the Department Store)

17. Relative Clause Lesson 2 (At the Department Store)

18. Relative Clause Lesson 3 (At the Department Store)

19. Conditional Lesson 1 (Enoshima Tour)

20. Conditional Lesson 2 (Enoshima Tour)

21. Passive Construction Lesson 1 (Hakone Tour)

22. Passive Construction Lesson 2 (Hakone Tour)

23. Causative Lesson 1 (Daily Chores)

24. Causative Lesson 2 (Daily Chores)


ACKNOWLEDGEMENTS

I would like to thank Kevin Kelly and Ruth H. Sanders for valuable comments on preliminary drafts of this paper. I am also indebted to Kevin Kelly for his assistance with the computer graphics and for suggestions regarding the development of BANZAI. Finally, I thank Kyoko Suda for her useful comments on the BANZAI lessons and for her assistance in collecting student questionnaires.

AUTHOR'S BIODATA

Noriko Nagata (Ph.D., University of Pittsburgh) is Associate Professor and the Director of the Japanese program at the University of San Francisco. She teaches Japanese language, linguistics, and culture courses. Her research involves the development of intelligent CALL programs employing natural language processing. Her publications include descriptions of her CALL programs and a series of empirical studies examining the effectiveness of various CALL features in second language acquisition.

AUTHOR'S ADDRESS

Noriko Nagata

Department of Modern and Classical Languages

University of San Francisco

2130 Fulton Street

San Francisco, CA 94117

Phone: 415/422-6227

Fax: 415/422-6928

Email: nagatan@usfca.edu
