4 changes: 4 additions & 0 deletions .github/instructions/testing-workflow.instructions.md
@@ -602,3 +602,7 @@ envConfig.inspect
- Never create "documentation tests" that just `assert.ok(true)` — if mocking limitations prevent testing, either test a different layer that IS mockable, or skip the test entirely with a clear explanation (1)
- When stubbing vscode APIs in tests via wrapper modules (e.g., `workspaceApis`), the production code must also use those wrappers — sinon cannot stub properties directly on the vscode namespace like `workspace.workspaceFolders`, so both production and test code must reference the same stubbable wrapper functions (4)
- **Before writing tests**, check if the function under test calls VS Code APIs directly (e.g., `commands.executeCommand`, `window.createTreeView`, `workspace.getConfiguration`). If so, FIRST update the production code to use wrapper functions from `src/common/*.apis.ts` (create the wrapper if it doesn't exist), THEN write tests that stub those wrappers. This prevents CI failures where sinon cannot stub the vscode namespace; a sketch of the wrapper pattern follows this list (4)
- **Async initialization in `setImmediate` must have error handling**: When extension activation uses `setImmediate(async () => {...})` for async manager registration, wrap the callback body in a try-catch with proper error logging. Unhandled rejections cause silent failures where managers never register, making smoke/E2E tests hang forever; see the second sketch after this list (1)
- **Never skip tests to hide infrastructure problems**: If tests require native binaries (like `pet`), the CI workflow must build/download them. Skipping tests when infrastructure is missing gives false confidence. Build from source (like vscode-python does) rather than skipping. Tests should fail clearly when something is wrong (2)
- **No retries for masking flakiness**: Mocha `retries` should not be used to mask test flakiness. If a test is flaky, fix the root cause. Retries hide real issues and slow down CI (1)
- **pet binary is required for environment manager registration**: The smoke/E2E/integration tests require the `pet` binary from `microsoft/python-environment-tools` to be built and placed in `python-env-tools/bin/`. Without it, `waitForApiReady()` will timeout because managers never register. CI must build pet from source using `cargo build --release --package pet` (2)
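
A minimal sketch of the wrapper pattern described in the bullets above. The file name follows the `src/common/*.apis.ts` convention they mention; the exact export name is illustrative:

```typescript
// src/common/workspace.apis.ts: a thin, stubbable wrapper over the vscode namespace.
import { workspace, WorkspaceFolder } from 'vscode';

// sinon cannot stub workspace.workspaceFolders directly, but it CAN stub this export.
export function getWorkspaceFolders(): readonly WorkspaceFolder[] | undefined {
    return workspace.workspaceFolders;
}
```

Production code then calls `getWorkspaceFolders()` wherever it would have read `workspace.workspaceFolders`, and a test can stub the wrapper through a namespace import (this relies on CommonJS-style transpilation, as is typical for VS Code extensions):

```typescript
import * as sinon from 'sinon';
import * as workspaceApis from '../../common/workspace.apis';

// Both production and test code reference the same stubbable function.
sinon.stub(workspaceApis, 'getWorkspaceFolders').returns([]);
```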
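And a sketch of the guarded `setImmediate` activation from the bullet above; `registerEnvironmentManagers` and `traceError` are hypothetical stand-ins for the extension's real registration entry point and logger:

```typescript
import * as vscode from 'vscode';

declare function registerEnvironmentManagers(context: vscode.ExtensionContext): Promise<void>; // hypothetical
declare function traceError(message: string, err: unknown): void; // hypothetical

export function activate(context: vscode.ExtensionContext): void {
    // Defer async manager registration so activate() returns quickly.
    setImmediate(async () => {
        try {
            await registerEnvironmentManagers(context);
        } catch (err) {
            // Without this catch the rejection is swallowed: managers silently never
            // register, and waitForApiReady() in smoke/E2E tests hangs until timeout.
            traceError('Failed to register environment managers', err);
        }
    });
}
```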
129 changes: 129 additions & 0 deletions .github/workflows/pr-check.yml
@@ -88,6 +88,49 @@ jobs:
- name: Checkout
uses: actions/checkout@v4

- name: Checkout Python Environment Tools
uses: actions/checkout@v4
with:
repository: 'microsoft/python-environment-tools'
path: 'python-env-tools-src'
sparse-checkout: |
crates
Cargo.toml
Cargo.lock
sparse-checkout-cone-mode: false

- name: Install Rust Toolchain
uses: dtolnay/rust-toolchain@stable

- name: Cache Rust build
uses: actions/cache@v4
with:
path: |
~/.cargo/registry
~/.cargo/git
python-env-tools-src/target
key: ${{ runner.os }}-cargo-pet-${{ hashFiles('python-env-tools-src/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-pet-

- name: Build Python Environment Tools
run: cargo build --release --package pet
working-directory: python-env-tools-src

- name: Copy pet binary (Unix)
if: runner.os != 'Windows'
run: |
mkdir -p python-env-tools/bin
cp python-env-tools-src/target/release/pet python-env-tools/bin/
chmod +x python-env-tools/bin/pet

- name: Copy pet binary (Windows)
if: runner.os == 'Windows'
run: |
mkdir -p python-env-tools/bin
cp python-env-tools-src/target/release/pet.exe python-env-tools/bin/
shell: bash

- name: Install Node
uses: actions/setup-node@v4
with:
@@ -138,6 +181,49 @@ jobs:
- name: Checkout
uses: actions/checkout@v4

- name: Checkout Python Environment Tools
uses: actions/checkout@v4
with:
repository: 'microsoft/python-environment-tools'
path: 'python-env-tools-src'
sparse-checkout: |
crates
Cargo.toml
Cargo.lock
sparse-checkout-cone-mode: false

- name: Install Rust Toolchain
uses: dtolnay/rust-toolchain@stable

- name: Cache Rust build
uses: actions/cache@v4
with:
path: |
~/.cargo/registry
~/.cargo/git
python-env-tools-src/target
key: ${{ runner.os }}-cargo-pet-${{ hashFiles('python-env-tools-src/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-pet-

- name: Build Python Environment Tools
run: cargo build --release --package pet
working-directory: python-env-tools-src

- name: Copy pet binary (Unix)
if: runner.os != 'Windows'
run: |
mkdir -p python-env-tools/bin
cp python-env-tools-src/target/release/pet python-env-tools/bin/
chmod +x python-env-tools/bin/pet

- name: Copy pet binary (Windows)
if: runner.os == 'Windows'
run: |
mkdir -p python-env-tools/bin
cp python-env-tools-src/target/release/pet.exe python-env-tools/bin/
shell: bash

- name: Install Node
uses: actions/setup-node@v4
with:
@@ -188,6 +274,49 @@ jobs:
- name: Checkout
uses: actions/checkout@v4

- name: Checkout Python Environment Tools
uses: actions/checkout@v4
with:
repository: 'microsoft/python-environment-tools'
path: 'python-env-tools-src'
sparse-checkout: |
crates
Cargo.toml
Cargo.lock
sparse-checkout-cone-mode: false

- name: Install Rust Toolchain
uses: dtolnay/rust-toolchain@stable

- name: Cache Rust build
uses: actions/cache@v4
with:
path: |
~/.cargo/registry
~/.cargo/git
python-env-tools-src/target
key: ${{ runner.os }}-cargo-pet-${{ hashFiles('python-env-tools-src/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-pet-

- name: Build Python Environment Tools
run: cargo build --release --package pet
working-directory: python-env-tools-src

- name: Copy pet binary (Unix)
if: runner.os != 'Windows'
run: |
mkdir -p python-env-tools/bin
cp python-env-tools-src/target/release/pet python-env-tools/bin/
chmod +x python-env-tools/bin/pet

- name: Copy pet binary (Windows)
if: runner.os == 'Windows'
run: |
mkdir -p python-env-tools/bin
cp python-env-tools-src/target/release/pet.exe python-env-tools/bin/
shell: bash

- name: Install Node
uses: actions/setup-node@v4
with:
3 changes: 0 additions & 3 deletions .vscode-test.mjs
@@ -11,7 +11,6 @@ export default defineConfig([
mocha: {
ui: 'tdd',
timeout: 120000,
retries: 1,
},
env: {
VSC_PYTHON_SMOKE_TEST: '1',
@@ -32,7 +31,6 @@ export default defineConfig([
mocha: {
ui: 'tdd',
timeout: 180000,
retries: 1,
},
env: {
VSC_PYTHON_E2E_TEST: '1',
@@ -51,7 +49,6 @@ export default defineConfig([
mocha: {
ui: 'tdd',
timeout: 60000,
retries: 1,
},
env: {
VSC_PYTHON_INTEGRATION_TEST: '1',
174 changes: 174 additions & 0 deletions src/test/common/testUtils.unit.test.ts
@@ -0,0 +1,174 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

import * as assert from 'assert';
import { retryUntilSuccess, sleep, waitForApiReady, waitForCondition } from '../testUtils';

suite('Test Utilities', () => {
suite('sleep', () => {
test('should resolve after specified time', async () => {
const start = Date.now();
await sleep(50);
const elapsed = Date.now() - start;
assert.ok(elapsed >= 45, `Expected at least 45ms, got ${elapsed}ms`);
});
});

suite('waitForCondition', () => {
test('should resolve immediately when condition is true', async () => {
await waitForCondition(() => true, 100, 'Should not fail');
});

test('should resolve when condition becomes true', async () => {
let counter = 0;
await waitForCondition(
() => {
counter++;
return counter >= 3;
},
1000,
'Condition did not become true',
10,
);
assert.ok(counter >= 3, 'Condition should have been checked multiple times');
});

test('should reject when timeout is reached', async () => {
await assert.rejects(
() => waitForCondition(() => false, 100, 'Custom error message', 10),
/Custom error message \(waited 100ms\)/,
);
});

test('should handle async conditions', async () => {
let counter = 0;
await waitForCondition(
async () => {
counter++;
await sleep(5);
return counter >= 2;
},
1000,
'Async condition failed',
10,
);
assert.ok(counter >= 2);
});

test('should continue polling when condition throws', async () => {
let counter = 0;
await waitForCondition(
() => {
counter++;
if (counter < 3) {
throw new Error('Not ready yet');
}
return true;
},
1000,
'Should eventually succeed',
10,
);
assert.ok(counter >= 3);
});
});

suite('retryUntilSuccess', () => {
test('should return result when function succeeds immediately', async () => {
const result = await retryUntilSuccess(
() => 42,
() => true,
1000,
'Should not fail',
);
assert.strictEqual(result, 42);
});

test('should return result when validation passes', async () => {
let counter = 0;
const result = await retryUntilSuccess(
() => {
counter++;
return counter;
},
(val) => val >= 3,
1000,
'Validation failed',
);
assert.ok(result >= 3);
});

test('should reject when timeout reached', async () => {
await assert.rejects(
() =>
retryUntilSuccess(
() => 1,
(val) => val > 10,
100,
'Custom timeout error',
),
/Custom timeout error: validation failed/,
);
});

test('should include last error message in rejection', async () => {
await assert.rejects(
() =>
retryUntilSuccess(
() => {
throw new Error('Specific failure');
},
() => true,
100,
'Operation failed',
),
/Operation failed: Specific failure/,
);
});

test('should handle async functions', async () => {
let counter = 0;
const result = await retryUntilSuccess(
async () => {
counter++;
await sleep(5);
return counter;
},
(val) => val >= 2,
1000,
'Async retry failed',
);
assert.ok(result >= 2);
});
});

suite('waitForApiReady', () => {
test('should return ready:true when getEnvironments succeeds', async () => {
const mockApi = {
getEnvironments: async () => [],
};
const result = await waitForApiReady(mockApi, 1000);
assert.deepStrictEqual(result, { ready: true });
});

test('should return ready:false with error when timeout reached', async () => {
const mockApi = {
getEnvironments: (): Promise<unknown[]> => new Promise(() => {}), // Never resolves
};
const result = await waitForApiReady(mockApi, 100);
assert.strictEqual(result.ready, false);
assert.ok(result.error?.includes('API not ready within 100ms'));
});

test('should return ready:false when getEnvironments throws', async () => {
const mockApi = {
getEnvironments: async () => {
throw new Error('Manager not registered');
},
};
const result = await waitForApiReady(mockApi, 1000);
assert.strictEqual(result.ready, false);
assert.ok(result.error?.includes('Manager not registered'));
});
});
});
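
The utilities exercised above are imported from `../testUtils` (i.e., `src/test/testUtils.ts`) but not shown in this diff. A rough sketch of implementations consistent with these assertions; the real code may differ in detail, and `retryUntilSuccess` (omitted) follows the same polling shape:

```typescript
export const sleep = (ms: number): Promise<void> => new Promise((resolve) => setTimeout(resolve, ms));

export async function waitForCondition(
    condition: () => boolean | Promise<boolean>,
    timeoutMs: number,
    errorMessage: string,
    intervalMs = 100,
): Promise<void> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        try {
            if (await condition()) {
                return;
            }
        } catch {
            // A throwing condition counts as "not ready yet"; keep polling.
        }
        await sleep(intervalMs);
    }
    throw new Error(`${errorMessage} (waited ${timeoutMs}ms)`);
}

export async function waitForApiReady(
    api: { getEnvironments(): Promise<unknown[]> },
    timeoutMs: number,
): Promise<{ ready: boolean; error?: string }> {
    try {
        await Promise.race([
            api.getEnvironments(),
            new Promise<never>((_, reject) =>
                setTimeout(() => reject(new Error(`API not ready within ${timeoutMs}ms`)), timeoutMs),
            ),
        ]);
        return { ready: true };
    } catch (err) {
        return { ready: false, error: err instanceof Error ? err.message : String(err) };
    }
}
```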
1 change: 0 additions & 1 deletion src/test/e2e/index.ts
@@ -20,7 +20,6 @@ export async function run(): Promise<void> {
ui: 'tdd',
color: true,
timeout: 180_000, // 3 minutes - E2E workflows can be slow
retries: 1, // Retry once on failure
slow: 30_000, // Mark tests as slow if they take > 30s
});

1 change: 0 additions & 1 deletion src/test/integration/index.ts
@@ -17,7 +17,6 @@ export async function run(): Promise<void> {
ui: 'tdd',
color: true,
timeout: 120_000, // 2 minutes
retries: 1,
slow: 15_000, // Mark as slow if > 15s
});
