Acknowledgments: Funding and Support for Explanatory Feedback Research

Abstract and 1 Introduction

2. Background

2.1 Effective Tutoring Practice

2.2 Feedback for Tutor Training

2.3 Sequence Labeling for Feedback Generation

2.4 Large Language Models in Education

3. Method

3.1 Dataset and 3.2 Sequence Labeling

3.3 GPT Facilitated Sequence Labeling

3.4 Metrics

4. Results

4.1 Results on RQ1

4.2 Results on RQ2

5. Discussion

6. Limitations and Future Work

7. Conclusion

8. Acknowledgments

9. References

APPENDIX

A. Lesson Principles

B. Input for Fine-Tuning GPT-3.5

C. Scatter Matrix of the Correlation on the Outcome-based Praise

D. Detailed Results of Fine-Tuned GPT-3.5 Model’s Performance

8. ACKNOWLEDGMENTS

This work is supported by funding from the Richard King Mellon Foundation (Grant #10851) and the Learning Engineering Virtual Institute (https://learning-engineering-virtual-institute.org/). Any opinions, findings, and conclusions expressed in this paper are those of the authors. We also wish to express our gratitude to Dr. Ralph Abboud and Dr. Carolyn P. Rosé for their invaluable guidance and recommendations, and to Yiyang Zhao and Yuting Wang for their assistance in verifying the rating scheme.

This paper is available on arxiv under CC BY 4.0 DEED license.

Authors:

(1) Jionghao Lin, Carnegie Mellon University ([email protected]);

(2) Eason Chen, Carnegie Mellon University ([email protected]);

(3) Zeifei Han, University of Toronto ([email protected]);

(4) Ashish Gurung, Carnegie Mellon University ([email protected]);

(5) Danielle R. Thomas, Carnegie Mellon University ([email protected]);

(6) Wei Tan, Monash University ([email protected]);

(7) Ngoc Dang Nguyen, Monash University ([email protected]);

(8) Kenneth R. Koedinger, Carnegie Mellon University ([email protected]).
