Add SqueezeNet Fire Module #585

Open
syed-nazmus-sakib wants to merge 2 commits into Open-Deep-ML:main from syed-nazmus-sakib:add-squeezenet-fire-module
Conversation

@syed-nazmus-sakib

Description

This PR implements Problem 190: SqueezeNet Fire Module, which teaches how to use 1x1 convolutions ("squeeze" layer) to reduce dimensionality before applying expensive 3x3 convolutions ("expand" layer).
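The squeeze-then-expand flow can be sketched in NumPy as below. This is a minimal illustration, assuming the `fire_module_forward` signature, the `(kh, kw, C_in, C_out)` weight layout, and the expand-1x1-then-expand-3x3 concatenation order implied by the test cases in this PR; it is not necessarily the exact `solution.py` implementation:

```python
import numpy as np

def conv2d(x, w, b, pad):
    """Naive 2D convolution. x: (H, W, C_in), w: (kh, kw, C_in, C_out), b: (C_out,)."""
    kh, kw, _, c_out = w.shape
    H, W, _ = x.shape
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((H, W, c_out))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + kh, j:j + kw, :]  # (kh, kw, C_in) window
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def fire_module_forward(x, s_w, s_b, e1_w, e1_b, e3_w, e3_b):
    # Squeeze: 1x1 conv + ReLU reduces channels before the expensive expand stage.
    squeezed = np.maximum(conv2d(x, s_w, s_b, pad=0), 0)
    # Expand: parallel 1x1 and 3x3 ('same' padding) convs + ReLU, concatenated on channels.
    e1 = np.maximum(conv2d(squeezed, e1_w, e1_b, pad=0), 0)
    e3 = np.maximum(conv2d(squeezed, e3_w, e3_b, pad=1), 0)
    return np.concatenate([e1, e3], axis=-1)
```

With the 3x3 all-ones input from the existing test case (identity squeeze, zero expand-1x1, all-ones 3x3 expand), the middle pixel of the 3x3 expand channel comes out to 9.0, matching the expected output below.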

Changes

  • Added questions/190_implement-squeezenet-fire-module/ with all required files.
  • Includes meta.json, description.md, learn.md, starter_code.py, solution.py, example.json, and tests.json.
  • Includes the utils/build_bundle.py fix for Windows encoding (redundant with main) so this branch passes CI independently.
  • Validated with utils/validate_questions.py.

Checklist

  • Question follows template structure
  • Solution passes all tests
  • Build script runs successfully

Collaborator

@moe18 left a comment


looks good, had a few ideas for the test cases

@@ -0,0 +1,5 @@
{
"input": "imput_tensor: (H=32, W=32, C_in=3)\nSqueeze 1x1: s1x1=16 filters\nExpand 1x1: e1x1=64 filters\nExpand 3x3: e3x3=64 filters",
Collaborator


should be input_tensor (small spelling issue)

{
"test": "import numpy as np\n# Middle pixel full sum\n# Same setup as above, check middle pixel (1,1)\n# 9 neighbors are 1s -> Sum = 9\ninput = np.ones((3, 3, 1))\ns_w = np.ones((1, 1, 1, 1))\ns_b = np.zeros(1)\ne1_w = np.zeros((1, 1, 1, 1))\ne1_b = np.zeros(1)\ne3_w = np.ones((3, 3, 1, 1))\ne3_b = np.zeros(1)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\nprint(res[1, 1, 1])",
"expected_output": "9.0"
}
Collaborator


add a test for Non-Zero Bias
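A sketch of what such a case might look like, assuming the same fire_module_forward signature and weight layout as the existing tests (a squeeze bias of 1 turns the all-ones input into 2, and an expand-1x1 bias of 1 then gives 2 + 1 = 3):

```json
{
"test": "import numpy as np\n# Non-zero bias: squeeze bias 1 -> squeeze out = 1*1 + 1 = 2\n# Expand 1x1 bias 1 -> e1x1 out = 2*1 + 1 = 3\ninput = np.ones((3, 3, 1))\ns_w = np.ones((1, 1, 1, 1))\ns_b = np.ones(1)\ne1_w = np.ones((1, 1, 1, 1))\ne1_b = np.ones(1)\ne3_w = np.zeros((3, 3, 1, 1))\ne3_b = np.zeros(1)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\nprint(res[0, 0, 0])",
"expected_output": "3.0"
}
```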

Collaborator


ReLU is never actually tested in any of the test cases
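One way a ReLU case could be written, assuming the same setup as the existing tests: negative squeeze weights make the squeeze pre-activation -1, which ReLU clamps to 0, so the 3x3 expand sums to 0 rather than -9:

```json
{
"test": "import numpy as np\n# ReLU check: s_w = -1 -> squeeze pre-activation = -1, clamped to 0 by ReLU\n# So the 3x3 expand sum at the middle pixel is 0, not -9\ninput = np.ones((3, 3, 1))\ns_w = -np.ones((1, 1, 1, 1))\ns_b = np.zeros(1)\ne1_w = np.zeros((1, 1, 1, 1))\ne1_b = np.zeros(1)\ne3_w = np.ones((3, 3, 1, 1))\ne3_b = np.zeros(1)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\nprint(res[1, 1, 1])",
"expected_output": "0.0"
}
```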

Copy link
Author


Thanks for the review! I've pushed the updates: imput_tensor is now input_tensor, and I've added non-zero bias and ReLU edge-case tests to tests.json.
