Conversation
Pull request overview
Implements the Week 11 assignment tasks by adding a Time class (Task 1) and a CLI-based AI quiz generator/game (Task 2), along with supporting npm configuration and ignores.
Changes:
- Implemented `Time` class with getters, `add*` methods, and formatting.
- Added AI quiz game CLI flow (fetch questions from OpenAI, run interactive quiz).
- Introduced multiple `package.json`/lockfiles plus `.gitignore` updates for Node dependencies and `.env`.
Reviewed changes
Copilot reviewed 8 out of 12 changed files in this pull request and generated 8 comments.
Show a summary per file
| File | Description |
|---|---|
| task-1/Time.js | Implements Time class logic (internal seconds storage, math, formatting). |
| task-1/index.js | Adds manual demo usage of Time APIs. |
| task-1/package.json | Adds additional dependencies (currently unused in task-1). |
| task-1/package-lock.json | Updates lockfile to reflect dependency changes. |
| task-2/index.js | CLI entrypoint: loads env, fetches AI questions, starts quiz. |
| task-2/app.js | Fetches quiz questions from OpenAI Chat Completions and parses JSON. |
| task-2/QuizGame.js | Runs interactive quiz loop, scoring, and output formatting. |
| task-2/package.json | Adds runtime dependencies for the quiz app. |
| task-2/package-lock.json | Adds lockfile for task-2 dependencies. |
| package.json | Adds new root npm package config (currently with failing test script). |
| package-lock.json | Adds new root lockfile (vitest and transitive deps). |
| .gitignore | Ignores node_modules/ and .env. |
Files not reviewed (2)
- task-1/package-lock.json: Language not supported
- task-2/package-lock.json: Language not supported
```json
    "chalk": "^5.6.2",
    "dotenv": "^17.3.1",
    "node-fetch": "^3.3.2",
    "readline-sync": "^1.4.10",
```
These dependencies (chalk/dotenv/node-fetch/readline-sync) are newly added to task-1 but are not used anywhere under task-1/ (the only runtime dependency needed here appears to be vitest for tests). Keeping unused dependencies increases install size and complicates audits; please remove the unused entries.
Suggested change:

```diff
-    "chalk": "^5.6.2",
-    "dotenv": "^17.3.1",
-    "node-fetch": "^3.3.2",
-    "readline-sync": "^1.4.10",
```
```javascript
        model: 'gpt-4o-mini',
        messages: [
          { role: 'system', content: 'You are a quiz generator. Output ONLY JSON.' },
          {
            role: 'user',
            content: `Generate 10 multiple-choice questions about ${topic}. Return a JSON object with a "questions" key.`,
          },
        ],
        response_format: { type: 'json_object' },
```
fetchQuizQuestions returns content.questions but the prompt doesn't specify the shape that QuizGame relies on (e.g., question, answers array, correctIndex). As written, the model can legally return questions without correctIndex/answers, which will make QuizGame.start() mis-score or crash. Define an explicit JSON schema in the prompt (field names/types, answer count, correct index range) and validate the parsed JSON before returning it.
This comment from GitHub Copilot is correct. You should tell the LLM exactly what the JSON object should look like. For instance, if I try and run your code I get this response:
```
[
  {
    question: "What is the capital city of Australia?",
    options: [
      "Sydney",
      "Melbourne",
      "Canberra",
      "Brisbane",
    ],
    answer: "Canberra",
  },
  ...
]
```

Your code is expecting a property `answers` but the LLM returns a property `options`. Consequently, you will get a runtime error when trying to call `answers.forEach()` in the `QuizGame` class.
A possible solution is to use a chatbot, for instance ChatGPT, to suggest a good prompt to give an LLM. For instance, you could ask ChatGPT this:
What is a good prompt to give an LLM for the following assignment:
Build a quiz app where an LLM generates the questions. The player sees one question at a time with four possible answers. They type the correct answer (1, 2, 3, or 4), and the application shows whether they answered correctly.
- A correct answer message will be shown in green and award the player 1 point.
- A wrong answer message will be shown in red, mention the correct answer, and award no points.
The quiz will end after 10 questions and show the final score at the end.
Notice that I just copied and pasted the text from the assignment description in Notion.
ChatGPT suggested this prompt:

```
You are a quiz generator.
Generate exactly 10 multiple-choice quiz questions.

Requirements:
- Each question must have:
  - A question
  - 4 answer options
  - Exactly 1 correct answer
- Questions should be clear, concise, and suitable for a general audience
- Avoid ambiguity or trick questions
- Do not repeat questions

Output format (STRICT JSON):
[
  {
    "question": "Question text here",
    "options": [
      "Option 1",
      "Option 2",
      "Option 3",
      "Option 4"
    ],
    "correctAnswer": 1
  }
]

Rules:
- "correctAnswer" must be a number (1–4) corresponding to the correct option
- Do not include explanations
- Do not include any text outside the JSON
```
Now you are telling the LLM exactly what the JSON object should look like, with the property names that your app expects.
Before using it in your code, you can try the prompt in the GitHub Models Playground and tweak it a bit, if necessary, to obtain the JSON output that you can work with.
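To guard against the model still drifting from that shape, you could also validate the parsed JSON before handing it to `QuizGame`, as Copilot suggests. A minimal sketch (the `validateQuestions` helper is hypothetical, not part of this PR), assuming the `options`/`correctAnswer` schema from the prompt above:

```javascript
// Hypothetical helper (not in this PR): check the parsed JSON against the
// schema the prompt asks for, so bad model output fails loudly and early.
function validateQuestions(questions) {
  if (!Array.isArray(questions) || questions.length === 0) {
    throw new Error('Expected a non-empty array of questions');
  }
  for (const q of questions) {
    if (typeof q.question !== 'string') {
      throw new Error('Each question needs a "question" string');
    }
    if (!Array.isArray(q.options) || q.options.length !== 4) {
      throw new Error('Each question needs exactly 4 "options"');
    }
    if (!Number.isInteger(q.correctAnswer) || q.correctAnswer < 1 || q.correctAnswer > 4) {
      throw new Error('"correctAnswer" must be an integer from 1 to 4');
    }
  }
  return questions;
}
```

Calling this right after `JSON.parse` turns a vague `forEach` crash deep inside the game loop into a clear error message at the point where the bad data entered.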
```javascript
  const apiKey = process.env.OPENAI_API_KEY;
  const url = 'https://api.openai.com/v1/chat/completions';

  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: 'gpt-4o-mini',
        messages: [
          { role: 'system', content: 'You are a quiz generator. Output ONLY JSON.' },
          {
            role: 'user',
            content: `Generate 10 multiple-choice questions about ${topic}. Return a JSON object with a "questions" key.`,
          },
        ],
        response_format: { type: 'json_object' },
      }),
    });

    const data = await response.json();
    if (data.error) throw new Error(data.error.message);

    const content = JSON.parse(data.choices[0].message.content);
    return content.questions;
```
The request is sent even when OPENAI_API_KEY is missing/empty, and the code doesn't check response.ok before reading/parsing the body. This can lead to confusing errors (or data.choices[0] access errors) when the API returns non-2xx responses. Add an early check for a missing API key and handle non-OK HTTP statuses explicitly (including status code/body) before attempting to parse choices.
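Those two guards could look like this. This is a sketch, not the author's code: the GitHub Models URL and the trimmed-down payload are illustrative, and the JSON-shape handling is unchanged from the PR.

```javascript
// Sketch of the two checks Copilot suggests: fail fast on a missing key,
// and surface non-2xx responses with their status and body.
async function fetchQuizQuestions(topic = 'Capital Cities') {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    // Fail before making a doomed network call.
    throw new Error('OPENAI_API_KEY is missing; add it to your .env file.');
  }

  const response = await fetch('https://models.github.ai/inference/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: `Generate 10 quiz questions about ${topic}.` }],
      response_format: { type: 'json_object' },
    }),
  });

  if (!response.ok) {
    // Include the status code and body so API errors are debuggable.
    const body = await response.text();
    throw new Error(`API request failed (${response.status}): ${body}`);
  }

  const data = await response.json();
  return JSON.parse(data.choices[0].message.content).questions;
}
```

With these checks in place, a missing `.env` entry or an expired token produces one clear error instead of a confusing `data.choices[0]` crash.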
📝 HackYourFuture auto grade
Assignment Score: 0 / 100
Status: ✅ Passed
remarcmij
left a comment
Hi @Unlock7, Task 1 passes all the tests. Note the comment from GitHub Copilot about the dead code: those method calls serve no purpose.
Task 2 cannot work as is. You have used an incorrect URL for the API. Also, your LLM prompt is not specific enough: you can't assume that the model will automatically provide a JSON object with the property names your code expects. See my comments below.
Take note of the other GitHub Copilot comments too and try to address them.
Good luck!
```javascript
export async function fetchQuizQuestions(topic = 'Capital Cities') {
  try {
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
```
This is the URL for the models from OpenAI, not the GitHub models. The GitHub personal access token does not work on the (paid) models provided by (the company) OpenAI. You will always get an "Invalid API key" response with this URL. The correct URL is shown below. It is the same URL that was mentioned in Notion.
Suggested change:

```diff
-    const response = await fetch('https://api.openai.com/v1/chat/completions', {
+    const response = await fetch('https://models.github.ai/inference/chat/completions', {
```
```diff
@@ -0,0 +1,31 @@
+import fetch from 'node-fetch';
```
There is no need to use this npm package. Modern Node.js supports fetch natively.
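For example, on Node.js 18 or later this runs without any import at all (a sketch; the `getJson` helper name is hypothetical):

```javascript
// fetch is a global in Node.js 18+ (built on undici); node-fetch is not needed.
async function getJson(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```

Dropping the import also lets you remove `node-fetch` from `package.json` entirely.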
@remarcmij
feat: complete Task 1 (Time class) and Task 2 (AI Quiz Game)