
Dagim H. #13

Open
Unlock7 wants to merge 2 commits into HackYourAssignment:main from Unlock7:main

Conversation


@Unlock7 Unlock7 commented Mar 25, 2026

feat: complete Task 1 (Time class) and Task 2 (AI Quiz Game)

Copilot AI review requested due to automatic review settings March 25, 2026 21:54
@github-actions

This comment has been minimized.


Copilot AI left a comment


Pull request overview

Implements the Week 11 assignment tasks by adding a Time class (Task 1) and a CLI-based AI quiz generator/game (Task 2), along with supporting npm configuration and ignores.

Changes:

  • Implemented Time class with getters, add* methods, and formatting.
  • Added AI quiz game CLI flow (fetch questions from OpenAI, run interactive quiz).
  • Introduced multiple package.json / lockfiles plus .gitignore updates for Node dependencies and .env.

Reviewed changes

Copilot reviewed 8 out of 12 changed files in this pull request and generated 8 comments.

Summary per file:

  • task-1/Time.js: Implements Time class logic (internal seconds storage, math, formatting).
  • task-1/index.js: Adds manual demo usage of Time APIs.
  • task-1/package.json: Adds additional dependencies (currently unused in task-1).
  • task-1/package-lock.json: Updates lockfile to reflect dependency changes.
  • task-2/index.js: CLI entrypoint; loads env, fetches AI questions, starts quiz.
  • task-2/app.js: Fetches quiz questions from OpenAI Chat Completions and parses JSON.
  • task-2/QuizGame.js: Runs interactive quiz loop, scoring, and output formatting.
  • task-2/package.json: Adds runtime dependencies for the quiz app.
  • task-2/package-lock.json: Adds lockfile for task-2 dependencies.
  • package.json: Adds new root npm package config (currently with a failing test script).
  • package-lock.json: Adds new root lockfile (vitest and transitive deps).
  • .gitignore: Ignores node_modules/ and .env.
Files not reviewed (2)
  • task-1/package-lock.json: Language not supported
  • task-2/package-lock.json: Language not supported


Comment thread task-2/QuizGame.js
Comment thread task-1/index.js
Comment thread task-1/Time.js
Comment thread task-2/package.json
Comment thread task-1/package.json
Comment on lines +17 to +20
"chalk": "^5.6.2",
"dotenv": "^17.3.1",
"node-fetch": "^3.3.2",
"readline-sync": "^1.4.10",

Copilot AI Mar 25, 2026


These dependencies (chalk/dotenv/node-fetch/readline-sync) are newly added to task-1 but are not used anywhere under task-1/ (the only runtime dependency needed here appears to be vitest for tests). Keeping unused dependencies increases install size and complicates audits; please remove the unused entries.

Suggested change
"chalk": "^5.6.2",
"dotenv": "^17.3.1",
"node-fetch": "^3.3.2",
"readline-sync": "^1.4.10",

Comment thread package.json
Comment thread task-2/app.js
Comment on lines +15 to +23
model: 'gpt-4o-mini',
messages: [
  { role: 'system', content: 'You are a quiz generator. Output ONLY JSON.' },
  {
    role: 'user',
    content: `Generate 10 multiple-choice questions about ${topic}. Return a JSON object with a "questions" key.`,
  },
],
response_format: { type: 'json_object' },

Copilot AI Mar 25, 2026


fetchQuizQuestions returns content.questions but the prompt doesn't specify the shape that QuizGame relies on (e.g., question, answers array, correctIndex). As written, the model can legally return questions without correctIndex/answers, which will make QuizGame.start() mis-score or crash. Define an explicit JSON schema in the prompt (field names/types, answer count, correct index range) and validate the parsed JSON before returning it.

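The validation step the comment asks for could be sketched like this. The field names (`question`, `answers`, `correctIndex`) are taken from the comment itself and are assumptions about what QuizGame expects, not verified against the student's actual code:

```javascript
// Hypothetical validator: checks the shape the review comment says QuizGame
// relies on. Field names are assumptions, not taken from the real QuizGame.
function validateQuestions(questions) {
  if (!Array.isArray(questions) || questions.length === 0) return false;
  return questions.every(
    (q) =>
      q !== null &&
      typeof q === 'object' &&
      typeof q.question === 'string' &&
      Array.isArray(q.answers) &&
      q.answers.length === 4 &&
      q.answers.every((a) => typeof a === 'string') &&
      Number.isInteger(q.correctIndex) &&
      q.correctIndex >= 0 &&
      q.correctIndex < q.answers.length
  );
}
```

fetchQuizQuestions could then throw a descriptive error when this check fails instead of handing malformed data to the quiz loop.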

This comment from GitHub Copilot is correct. You should tell the LLM exactly what the JSON object should look like. For instance, if I try to run your code I get this response:

[
  {
    question: "What is the capital city of Australia?",
    options: [
      "Sydney",
      "Melbourne",
      "Canberra",
      "Brisbane",
    ],
    answer: "Canberra",
  },
  ...
]

Your code is expecting a property answers but the LLM returns a property options. Consequently, you will get a runtime error when trying to call answers.forEach() in the QuizGame class.

A possible solution is to use a chatbot, for instance ChatGPT, to suggest a good prompt to give an LLM. For instance, you could ask ChatGPT this:


What is a good prompt to give an LLM for the following assignment:

Build a quiz app where an LLM generates the questions. The player sees one question at a time with four possible answers. They type the correct answer (1, 2, 3, or 4), and the application shows whether they answered correctly.

  • A correct answer message will be shown in green and award the player 1 point.
  • A wrong answer message will be shown in red, mention the correct answer, and award no points.

The quiz will end after 10 questions and show the final score at the end.


Notice that I just copied and pasted the text from the assignment description in Notion.

ChatGPT suggested this prompt:

You are a quiz generator.

Generate exactly 10 multiple-choice quiz questions.

Requirements:
- Each question must have:
  - A question
  - 4 answer options
  - Exactly 1 correct answer
- Questions should be clear, concise, and suitable for a general audience
- Avoid ambiguity or trick questions
- Do not repeat questions

Output format (STRICT JSON):

[
  {
    "question": "Question text here",
    "options": [
      "Option 1",
      "Option 2",
      "Option 3",
      "Option 4"
    ],
    "correctAnswer": 1
  }
]

Rules:
- "correctAnswer" must be a number (1–4) corresponding to the correct option
- Do not include explanations
- Do not include any text outside the JSON

Now you are telling the LLM exactly what the JSON object should look like, with the property names that your app expects.

Before using it in your code, you can try the prompt in the GitHub Models Playground and tweak it a bit, if necessary, to obtain JSON output that you can work with.
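As an illustration, such a prompt could be embedded in code so the expected shape travels with the request. The helper name `buildQuizPrompt` is hypothetical, and the field names follow the suggested prompt above rather than the student's code:

```javascript
// Illustrative helper: builds the strict prompt suggested above, so the JSON
// contract (question/options/correctAnswer) lives in one place in the code.
function buildQuizPrompt(topic, count = 10) {
  return [
    'You are a quiz generator.',
    `Generate exactly ${count} multiple-choice quiz questions about ${topic}.`,
    'Output format (STRICT JSON): an array of objects, each shaped as',
    '{ "question": string, "options": [exactly 4 strings], "correctAnswer": number (1-4) }.',
    'Do not include explanations or any text outside the JSON.',
  ].join('\n');
}
```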

Comment thread task-2/app.js Outdated
Comment on lines +4 to +31
const apiKey = process.env.OPENAI_API_KEY;
const url = 'https://api.openai.com/v1/chat/completions';

try {
  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [
        { role: 'system', content: 'You are a quiz generator. Output ONLY JSON.' },
        {
          role: 'user',
          content: `Generate 10 multiple-choice questions about ${topic}. Return a JSON object with a "questions" key.`,
        },
      ],
      response_format: { type: 'json_object' },
    }),
  });

  const data = await response.json();
  if (data.error) throw new Error(data.error.message);

  const content = JSON.parse(data.choices[0].message.content);
  return content.questions;

Copilot AI Mar 25, 2026


The request is sent even when OPENAI_API_KEY is missing/empty, and the code doesn't check response.ok before reading/parsing the body. This can lead to confusing errors (or data.choices[0] access errors) when the API returns non-2xx responses. Add an early check for a missing API key and handle non-OK HTTP statuses explicitly (including status code/body) before attempting to parse choices.

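A sketch of both guards the comment asks for (the endpoint, model, and response shape mirror the snippet above; this is illustrative, not the student's final code):

```javascript
// Sketch: fail fast on a missing API key and on non-2xx responses before
// touching data.choices. URL and payload are copied from the snippet above.
async function fetchQuizQuestions(topic = 'Capital Cities') {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error('OPENAI_API_KEY is not set; add it to your .env file.');
  }

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [
        { role: 'system', content: 'You are a quiz generator. Output ONLY JSON.' },
        { role: 'user', content: `Generate 10 multiple-choice questions about ${topic}.` },
      ],
      response_format: { type: 'json_object' },
    }),
  });

  // Surface the HTTP status and body so failures are diagnosable.
  if (!response.ok) {
    const body = await response.text();
    throw new Error(`API request failed with status ${response.status}: ${body}`);
  }

  const data = await response.json();
  return JSON.parse(data.choices[0].message.content).questions;
}
```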
@remarcmij remarcmij self-assigned this Mar 25, 2026
@Unlock7 Unlock7 closed this Mar 25, 2026
@Unlock7 Unlock7 reopened this Mar 25, 2026
@github-actions

📝 HackYourFuture auto grade

Assignment Score: 0 / 100 ✅

Status: ✅ Passed
Minimum score to pass: 0
🧪 The auto grade is experimental and still being improved

Test Details


@remarcmij remarcmij left a comment


Hi @Unlock7, Task 1 passes all the tests. Note the comment from GitHub Copilot about the dead code: those method calls serve no purpose.

Task 2 cannot work as is. You have used an incorrect URL for the API. Also, your LLM prompt is not specific enough; you can't assume that it will automatically provide a JSON object with the property names your code expects. See my comments below.

Take note of the other GitHub Copilot comments too and try to address them.

Good luck!

Comment thread task-2/app.js

export async function fetchQuizQuestions(topic = 'Capital Cities') {
  try {
    const response = await fetch('https://api.openai.com/v1/chat/completions', {

This is the URL for the models from OpenAI, not the GitHub models. The GitHub Personal Access Token does not work with the (paid) models provided by (the company) OpenAI. You will always get an "Invalid API key" response with this URL. The correct URL is shown below. It is the same URL that was mentioned in Notion.

Suggested change
const response = await fetch('https://api.openai.com/v1/chat/completions', {
const response = await fetch('https://models.github.ai/inference/chat/completions', {


Comment thread task-2/app.js
@@ -0,0 +1,31 @@
import fetch from 'node-fetch';

There is no need to use this npm package. Modern Node.js supports fetch natively.
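For illustration, the same kind of request works with the built-in global fetch on Node.js 18 and later, with no import at all (the function name and URL parameter here are illustrative):

```javascript
// No `import fetch from 'node-fetch'` needed on Node.js 18+:
// fetch is available as a global.
async function getStatus(url) {
  const response = await fetch(url); // global fetch
  return response.status;
}
```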

@remarcmij remarcmij added the Reviewed (This assignment has been reviewed by a mentor and feedback has been provided) label Mar 26, 2026
@Unlock7
Author

Unlock7 commented Mar 26, 2026

@remarcmij
Appreciate the feedback. I’ll review everything and keep improving. If I have any questions, I’ll reach out!
Thank you.
