Conversation

@mbg
Member

@mbg mbg commented Jan 28, 2026

It is useful to know which feature flags affect a particular CodeQL workflow run. A flag's value is typically determined by querying the /features API, but it can also be influenced by environment variables, the CLI version, and so on.

This PR modifies the Features class to track which feature flags are queried during a particular step and what they evaluated to. This data is then included in status reports when possible.
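
For illustration, here is a minimal TypeScript sketch of what such tracking could look like. Only the Features class and getValue are named in this PR; getQueriedFeatures and the evaluation logic are assumptions made for the example.

```ts
// Minimal sketch, not the action's actual implementation.
type FeatureValue = boolean;

class Features {
  private queried = new Map<string, FeatureValue>();

  async getValue(feature: string): Promise<FeatureValue> {
    const value = await this.evaluate(feature);
    // Remember every flag this step queried, together with the value it
    // resolved to, so it can be attached to the step's status report.
    this.queried.set(feature, value);
    return value;
  }

  // Assumed accessor used when building the status report.
  getQueriedFeatures(): Record<string, FeatureValue> {
    return Object.fromEntries(this.queried);
  }

  private async evaluate(feature: string): Promise<FeatureValue> {
    // Placeholder: the real implementation combines env vars, the
    // /features API response, and CLI version checks.
    return process.env[`CODEQL_FAKE_FLAG_${feature.toUpperCase()}`] === "true";
  }
}
```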

Risk assessment

For internal use only. Please select the risk level of this change:

  • Low risk: Changes are fully under feature flags, or have been fully tested and validated in pre-production environments and are highly observable, or are documentation or test only.

Which use cases does this change impact?

Workflow types:

  • Advanced setup - Impacts users who have custom CodeQL workflows.
  • Managed - Impacts users with dynamic workflows (Default Setup, CCR, ...).

Products:

  • Code Scanning - The changes impact analyses when analysis-kinds: code-scanning.
  • Code Quality - The changes impact analyses when analysis-kinds: code-quality.
  • CCR - The changes impact analyses for Copilot Code Reviews.
  • Third-party analyses - The changes affect the upload-sarif action.

Environments:

  • Dotcom - Impacts CodeQL workflows on github.com and/or GitHub Enterprise Cloud with Data Residency.

How did/will you validate this change?

  • Unit tests - I am depending on unit test coverage (i.e. tests in .test.ts files).
  • End-to-end tests - I am depending on PR checks (i.e. tests in pr-checks).

If something goes wrong after this change is released, what are the mitigation and rollback strategies?

  • Rollback - Change can only be disabled by rolling back the release or releasing a new version with a fix.

How will you know if something goes wrong after this change is released?

  • Telemetry - I rely on existing telemetry or have made changes to the telemetry.
    • Dashboards - I will watch relevant dashboards for issues after the release. Consider whether this requires the change to be released at a particular time rather than as part of a regular release.
    • Alerts - New or existing monitors will trip if something goes wrong with this change.

Are there any special considerations for merging or releasing this change?

  • No special considerations - This change can be merged at any time.

Merge / deployment checklist

  • Confirm this change is backwards compatible with existing workflows.
  • Consider adding a changelog entry for this change.
  • Confirm the readme and docs have been updated if necessary.

@mbg mbg self-assigned this Jan 28, 2026
Copilot AI review requested due to automatic review settings January 28, 2026 13:02
@mbg mbg requested a review from a team as a code owner January 28, 2026 13:02
@github-actions github-actions bot added the size/M label (Should be of average difficulty to review) Jan 28, 2026
Contributor

Copilot AI left a comment

Pull request overview

This PR adds visibility into which CodeQL feature flags were queried during an action step and what values they evaluated to, and includes that data in status reports.

Changes:

  • Extend Features to record every queried feature flag and its resolved value.
  • Extend createStatusReportBase to accept queried feature outcomes and emit them as feature_flags in status reports.
  • Plumb Features (or undefined) through action entrypoints and update unit tests accordingly (a simplified sketch of this plumbing follows).
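
A simplified sketch of how the queried flags might flow into the report, per the items above; the parameter list, the encoding of feature_flags, and the getQueriedFeatures accessor are assumptions for illustration, not the action's actual code.

```ts
type QueriedFeatures = Record<string, boolean>;

interface StatusReportBase {
  action_name: string;
  status: string;
  feature_flags?: string[]; // exact encoding in the real payload is an assumption
}

async function createStatusReportBase(
  actionName: string,
  status: string,
  queriedFeatures?: QueriedFeatures,
): Promise<StatusReportBase> {
  const report: StatusReportBase = { action_name: actionName, status };
  if (queriedFeatures !== undefined) {
    // e.g. ["some_feature=true", "another_feature=false"]
    report.feature_flags = Object.entries(queriedFeatures).map(
      ([name, value]) => `${name}=${value}`,
    );
  }
  return report;
}

// An entrypoint passes the flags its step queried, or undefined for steps
// that have no Features instance.
async function reportSuccess(features?: { getQueriedFeatures(): QueriedFeatures }) {
  return createStatusReportBase("init", "success", features?.getQueriedFeatures());
}
```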

Reviewed changes

Copilot reviewed 20 out of 20 changed files in this pull request and generated 3 comments.

Summary per file:

  • src/feature-flags.ts: Record queried feature flags and expose them for reporting.
  • src/feature-flags.test.ts: Add coverage to verify queried feature flags are recorded.
  • src/status-report.ts: Add feature_flags to the status report payload and accept queried feature data.
  • src/status-report.test.ts: Update tests for the new createStatusReportBase signature and payload.
  • src/analyze-action.ts, src/init-action.ts, src/init-action-post.ts, src/setup-codeql-action.ts, src/upload-sarif-action.ts: Pass queried feature information into status reports where available.
  • src/autobuild-action.ts, src/resolve-environment-action.ts, src/start-proxy-action.ts: Update the status report creation call signature (no features tracked in these steps).
  • lib/analyze-action.js, lib/autobuild-action.js, lib/init-action.js, lib/init-action-post.js, lib/resolve-environment-action.js, lib/setup-codeql-action.js, lib/start-proxy-action.js, lib/upload-sarif-action.js: Generated JS output corresponding to the TS changes.

Comments suppressed due to low confidence (1)

src/status-report.ts:646

  • sendUnhandledErrorStatusReport passes {} for queriedFeatures, which will currently report feature_flags: [] even though this path has no Features instance. To avoid misrepresenting availability, pass undefined here (and/or rely on createStatusReportBase omitting the field when queriedFeatures is undefined).
    const statusReport = await createStatusReportBase(
      actionName,
      "failure",
      actionStartedAt,
      undefined,
      {},
      undefined,

Contributor

@henrymercer henrymercer left a comment

High-level comment: What do you think about reporting all the feature flags, with whether they were queried or not being an attribute? Something like:

{ "feature_name": { enabled: true, queried: false }}

I ask because currently we don't have feature flag status in Kusto, and it is useful to be able to filter based on feature values even if those features weren't queried in that Action directly. For example, a feature that impacts analyze, but is queried in init.
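
Rendered as a TypeScript type, the proposed shape might look like this (field names taken from the example above; purely illustrative):

```ts
// Hypothetical shape for the proposal: report every known flag, and mark
// whether this particular action queried it.
type FeatureFlagReport = Record<string, { enabled: boolean; queried: boolean }>;

const example: FeatureFlagReport = {
  feature_name: { enabled: true, queried: false },
};
```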

@mbg
Member Author

mbg commented Jan 28, 2026

I think those are essentially two separate points:

  1. Reporting all FFs.
    • Not inherently useful, because we know that a FF that isn't reported wasn't relevant for a particular step.
    • This would entail some overhead in calling getValue for all FFs. (Also means we require a CodeQL instance to query them all.)
    • Possibly some misleading results if a FF is set for only a particular step using an env var, but not others.
  2. Understanding whether a FF was queried in an earlier step.
    • This is useful to know and would already be possible with the approach taken in this PR, although somewhat complicated to query since we'd have to find the matching earlier step that we care about.
    • Alternatively, we could save the queried FFs between steps and report them all.
    • That might lead to misleading results again if a FF is set for only a particular step using an env var, but not others.

The approach I went with here is the most conservative: it performs essentially no extra work, reports only what was queried in a given step, and still captures the actual value each FF had during that step.

@henrymercer
Contributor

> Not inherently useful, because we know that a FF that isn't reported wasn't relevant for a particular step.

I don't think this is true. There are a bunch of flags that we get the value of in init, save to the config or database, and use in analyze.

> This would entail some overhead in calling getValue for all FFs. (Also means we require a CodeQL instance to query them all.)

We save the feature flags API response to disk. If there's overhead in getting the value of environment variables, we could cache those too, since feature flags should be stable across a job (a small caching sketch follows this comment).

> Possibly some misleading results if a FF is set for only a particular step using an env var, but not others.

That's fair — while this is a bad pattern, it's very possible that there are workflows out there that do this.
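
On the point about caching environment-variable-derived values, a minimal per-job memoization sketch; all names here are invented for the example.

```ts
// Per-job cache for env-var-backed flag values; since feature flags should be
// stable across a job, the first evaluation wins.
const envFlagCache = new Map<string, boolean>();

function getCachedEnvFlag(envVarName: string): boolean {
  const cached = envFlagCache.get(envVarName);
  if (cached !== undefined) {
    return cached;
  }
  const value = process.env[envVarName] === "true";
  envFlagCache.set(envVarName, value);
  return value;
}
```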
