VS Code for CS50's Introduction to Artificial Intelligence with Python
CS50's Introduction to Artificial Intelligence with Python is a nice course with problem sets for each section.
The course staff provide most of the scaffolding code and expect students to write only the core functions.
They even provide a free GitHub Codespace for students of the course.
Solutions are checked for correctness with a program, check50, which tests the functions students write and comes preinstalled in the Codespace. While I could use it there, it would help to know exactly which test scenario failed. The tests are also helpfully published, and going through them I realized they're better expressed as pytest tests.
I've been an old-school terminal + vim person for two decades, but I wanted to try out Visual Studio Code for Python, and the experience was surprisingly pleasant. Its support for Python, debugging, virtual environments, and pytest is very good.
Aside: you aren't allowed to use GitHub Copilot in the course.
The following are my steps for a problem.
Structure
cs50_pagerank
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ corpus0
β”‚   β”‚   └── 1.html
β”‚   └── corpus1
β”‚       └── bfs.html
β”œβ”€β”€ pyproject.toml
β”œβ”€β”€ README.md
β”œβ”€β”€ src
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── pagerank.py
β”œβ”€β”€ tests
β”‚   └── test_pagerank.py
└── uv.lock
cd dev/py
My projects are always here.
uv init --app cs50_problem_name
uv by Astral is very nice and you should watch ArjanCodes' video on it. uv init initializes the project with a pyproject.toml file, a .gitignore, and a hello-world Python file, which we'll now delete.
rm main.py
We don't need no lorem ipsum!
mkdir src tests .vscode
Essential directories for reasonable code organization. Move the code provided by the problem into src.
touch src/__init__.py
For cleaner imports.
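With the pythonpath setting added to pyproject.toml below, tests can then import the solution module without any package prefix. A quick sanity check, assuming the problem's file is src/pagerank.py:
# inside a test: pytest resolves this via pythonpath = "src"
import pagerank as pr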
Packages
uv add --dev pytest
uv creates a virtual environment in .venv (which wasn't there before; alternatively, you could run uv venv to create it) and installs pytest as a development dependency, so it stays out of the project's runtime requirements (a production install can skip it with uv sync --no-dev).
If the problem needs a package to be installed, use
uv add <packagename>
(e.g., uv add Pillow), and if they've given a requirements.txt file, use
uv add -r requirements.txt
This makes requirements.txt redundant, because pyproject.toml and uv.lock now cover the requirements; you can delete requirements.txt.
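After these commands, the relevant parts of pyproject.toml will look roughly like this. This is a sketch: the name and version pins will differ for your project, and older uv releases record dev dependencies under [tool.uv] rather than [dependency-groups].
[project]
name = "cs50-pagerank"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "pillow>=11.0",  # from `uv add Pillow`
]

[dependency-groups]
dev = [
    "pytest>=8.3",   # from `uv add --dev pytest`
]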
Settings
In pyproject.toml, add the [tool.pytest.ini_options] section shown below. This tells pytest where it can find the source and test directories.
Open a VS Code terminal and run
git config --local user.name "<username>"
git config --local user.email "<email>"
This sets up user.name and user.email for git if you don't have (or don't want) global settings.
Add the contents of the settings.json section shown below into .vscode/settings.json. This will add colour to pytest output, use the venv in VS Code's terminal, and so on.
Tests
Add the tests for the corresponding problem, which are published: rename the functions with a "test_" prefix (so pytest can find them), and use asserts and pytest_check functions (if you have more than one check). See the test_pagerank.py section below for an example. You would need to run
uv add --dev pytest_check
to use pytest_check.
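The point of pytest_check over a bare assert is that a failed check doesn't abort the test, so a single run reports every mismatch instead of stopping at the first. A minimal sketch of the difference (the ranks dict here is hypothetical):
import pytest_check as check

def test_ranks_sketch():
    ranks = {"1": 0.2, "2": 0.5}  # hypothetical output under test
    # Both checks run and are reported even if the first one fails;
    # a bare assert would have stopped the test at the first failure.
    check.is_in("3", ranks, "no pagerank found for page 3")
    check.almost_equal(sum(ranks.values()), 1.0, abs=0.01)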
Also, if you need to download a dataset (like this one), paste the full path to the data directory. If the program needs command-line arguments, you can add them to launch.json like so.
Code
Write the code for the functions the student is expected to complete and use VS Code to run the tests.
It's nice that each test can be run in debug mode, so you can look at the parameters passed, the call stack, and variables as they get modified.
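If you'd rather stay in the terminal, the same tests run through uv (these are standard uv and pytest invocations):
uv run pytest
uv run pytest tests/test_pagerank.py::test_sample0 -v
The first runs the whole suite using the project's venv; the second runs a single test verbosely.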
pyproject.toml
[tool.pytest.ini_options]
pythonpath = "src"
testpaths = ["tests"]
settings.json
{
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.pytestArgs": [
"--color=auto"
],
"python.testing.cwd": "${workspaceFolder}/tests",
"python.testing.autoTestDiscoverOnSaveEnabled": true,
"python.testing.pytestPath": "${workspaceFolder}/.venv/bin/pytest",
"python.analysis.extraPaths": [
"${workspaceFolder}/src"
],
"python.analysis.autoSearchPaths": true,
"python.terminal.activateEnvironment": true,
"[python]": {
"editor.inlayHints.enabled": "off"
}
}
launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: Current File with Arguments",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
// "args": "${command:pickArgs}"
"args": [
"data/corpus1"
]
}
]
}
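For context, here's a minimal sketch (not the course's exact scaffold) of an entry point that consumes that argument, which is why args above points at a corpus directory:
# sketch of a pagerank.py entry point reading the corpus path
import sys

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("Usage: python pagerank.py corpus")
    corpus_dir = sys.argv[1]  # e.g. "data/corpus1" from launch.json
    print(f"computing pageranks for pages in {corpus_dir}...")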
test_pagerank.py
# Based on https://github.com/ai50/projects/blob/2024/x/pagerank/__init__.py
import logging
import pytest_check as check
import pagerank as pr
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
CORPORA = [
# 0: simple
{"1": {"2"}, "2": {"1", "3"}, "3": {"2", "4"}, "4": {"2"}},
# 1: slightly more involved
{
"1": {"2", "3"},
"2": {"1", "3", "4"},
"3": {"4", "5"},
"4": {"1", "2", "3", "6"},
"5": {"3"},
"6": {"1", "2", "3"},
},
# 2: disjoint
{
"1": {"2"},
"2": {"1", "3"},
"3": {"2", "4"},
"4": {"2"},
"5": {"6"},
"6": {"5", "7"},
"7": {"6", "8"},
"8": {"6"},
},
# 3: no links
{"1": {"2"}, "2": {"1", "3"}, "3": {"2", "4", "5"}, "4": {"1", "2"}, "5": set()},
]
RANKS = [
# 0: simple
{"1": 0.21991, "2": 0.42921, "3": 0.21991, "4": 0.13096},
# 1: slightly more involved
{
"1": 0.12538,
"2": 0.13922,
"3": 0.31297,
"4": 0.19746,
"5": 0.15801,
"6": 0.06696,
},
# 2: disjoint
{
"1": 0.10996,
"2": 0.21461,
"3": 0.10996,
"4": 0.06548,
"5": 0.10996,
"6": 0.21461,
"7": 0.10996,
"8": 0.06548,
},
# 3: no links
{"1": 0.24178, "2": 0.35320, "3": 0.19773, "4": 0.10364, "5": 0.10364},
]
# ranks for just corpus 0 with damping factor 0.60
RANK_0_60 = {"1": 0.21893, "2": 0.39645, "3": 0.21893, "4": 0.16568}
def assert_within(actual, expected, tolerance, name="value"):
lower = expected - tolerance
upper = expected + tolerance
assert lower <= actual <= upper, (
f"expected {name} to be in range [{lower}, {upper}], got {actual} instead"
)
def assert_distribution_within(actual, expected, tolerance):
for value in expected:
check.is_in(value, actual, f"no pagerank found for page {value}")
assert_within(
actual[value], expected[value], tolerance, name=f"pagerank {value}"
)
def log_corpus(corpus, damping):
logging.info(f"testing on corpus {corpus} with damping factor {damping}...")
def test_sample0():
"""sample_pagerank returns correct results for simple corpus"""
damping = 0.85
corpus = CORPORA[0].copy()
expected = RANKS[0]
tolerance = 0.05
log_corpus(corpus, damping)
actual = pr.sample_pagerank(corpus, damping, 10000)
assert_distribution_within(actual, expected, tolerance)
def test_sample1():
"""sample_pagerank returns correct results for complex corpus"""
damping = 0.85
corpus = CORPORA[1].copy()
expected = RANKS[1]
tolerance = 0.05
log_corpus(corpus, damping)
actual = pr.sample_pagerank(corpus, damping, 10000)
assert_distribution_within(actual, expected, tolerance)
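The RANK_0_60 table above isn't exercised by the two tests shown; a test for the non-default damping factor would follow the same pattern (my sketch, mirroring the published tests):
def test_sample0_damping60():
    """sample_pagerank returns correct results for simple corpus with damping 0.60"""
    damping = 0.60
    corpus = CORPORA[0].copy()
    expected = RANK_0_60
    tolerance = 0.05
    log_corpus(corpus, damping)
    actual = pr.sample_pagerank(corpus, damping, 10000)
    assert_distribution_within(actual, expected, tolerance)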
Published 15 May, 2025.