Source: https://gist.github.com/Gaelan/cf5ae4a1e9d8d64cb0b732cf3a38e04a

ChatGPT passes the 2022 AP Computer Science A free response section

For fun, I had ChatGPT take the free response section of the 2022 AP Computer Science A exam. (The exam also has a multiple-choice section, but the College Board doesn't publish this.) It scored 32/36.

Methodology

  • For each question, I pasted in the full text of the question.
  • I tried each question once and took the response as given: no cherry-picking. For readability, I've added indentation in some cases and included method signatures where the question provided them and ChatGPT supplied only a body. I've added question numbers; any other comments are ChatGPT's.
  • Many questions have examples containing tables or diagrams; because those don't translate well to plain text, I excluded those tables/diagrams and any text that referenced them.
  • I excluded the initial instructions at the top of the exam booklet, the "Class information for this question" boxes, and instructions about where/how to write the response in the free response booklet.
  • I clicked "reset thread" between each of the four numbered questions.
  • For questions with parts (a) and (b), I pasted in the full question up to the end of part (a), got the response for part (a), then pasted in just part (b) (in the same thread) and got the answer to that part.
    • In one case (question 1), ChatGPT provided an implementation for both parts after being asked only for part (a), so I took that implementation; as a result, it never saw the details on page 7 of the exam.

Scoring and notes

ChatGPT scored 32/36 according to my best interpretation of the College Board's scoring guidelines. It missed the following points:

  • Question 1, point 1: fails to call getPoints or goalReached on a level object (it tries to access a levels array which doesn't exist)
  • Question 1, point 4: initializes to 3 instead of tripling
  • Question 1, point 6: initializes a new Game each time instead of calling methods on this (this works, and actually seems a little more sensible, but the text of the scoring guidelines seems to imply I should dock a point here; see the first sketch after this list)
  • Question 4, point 4: off-by-one error in generating random numbers (it uses rand.nextInt(MAX - 1) + 1; nextInt's parameter is exclusive, so the highest value we could get is MAX - 1, but the question states that MAX should be inclusive; see the second sketch after this list)
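
To illustrate the question 1, point 6 distinction, here is a minimal sketch. The Game methods below are hypothetical stand-ins I wrote for illustration, not the exam's actual class or ChatGPT's actual code; the only point is the difference between calling a method on this and constructing a new Game each time.

    // Hypothetical stand-in, not the exam's actual Game class.
    public class Game {
        // Stub: in the real question, a method like this simulates one play.
        public int play() {
            return 0;
        }

        // What the scoring guidelines appear to expect: reuse the current object.
        public int simulateReusingThis(int num) {
            int total = 0;
            for (int i = 0; i < num; i++) {
                total += this.play();
            }
            return total;
        }

        // ChatGPT's approach: construct a fresh Game for every play.
        // It works here, but the guidelines seem to expect calls on this.
        public int simulateWithNewGames(int num) {
            int total = 0;
            for (int i = 0; i < num; i++) {
                total += new Game().play();
            }
            return total;
        }
    }

And a sketch of the question 4 off-by-one: Random.nextInt(bound) returns a value from 0 (inclusive) to bound (exclusive), so nextInt(MAX - 1) + 1 can never produce MAX, whereas nextInt(MAX) + 1 covers 1 through MAX inclusive. (MAX = 6 and the surrounding demo class are my own, chosen just for illustration.)

    import java.util.Random;

    public class RandomBoundDemo {
        public static void main(String[] args) {
            final int MAX = 6; // arbitrary upper bound for the demo
            Random rand = new Random();

            // ChatGPT's version: nextInt(MAX - 1) yields 0..MAX-2,
            // so after +1 the result is 1..MAX-1 and MAX is never produced.
            int offByOne = rand.nextInt(MAX - 1) + 1;

            // Corrected version: nextInt(MAX) yields 0..MAX-1,
            // so after +1 the result is 1..MAX inclusive, as the question requires.
            int inclusive = rand.nextInt(MAX) + 1;

            System.out.println("off-by-one draw: " + offByOne);
            System.out.println("inclusive draw:  " + inclusive);
        }
    }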

ChatGPT's answers to several questions use parts of Java that aren't in the AP subset; fair enough, as I didn't tell it about the subset. In any case, I don't think there's a rule against that as long as the solution works.

ChatGPT's implementation of question 3b (collectComments()) is needlessly convoluted, but it looks like it would work fine.

