Conversation

@tanish111

Description

This work introduces a new endpoint in CodeCarbon that enables users to run code remotely and obtain carbon emission estimates. The implementation leverages Kaggle’s open APIs to push user code to the user’s Kaggle environment, execute it remotely, and store the resulting carbon metrics directly in the user’s CodeCarbon dashboard.
By building on Kaggle’s infrastructure, this approach benefits from Kaggle’s free-tier GPU access, allowing users to generate carbon estimates without the need for expensive local or cloud GPU resources.
This endpoint is designed as a foundational step toward the future development of the CodeCarbon MCP server, which will require reliable remote code execution capabilities to support advanced carbon-aware workflows.
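
For concreteness, here is a rough sketch of the flow (illustrative only, not the exact code in this PR): the endpoint wraps the user's code in a CodeCarbon `EmissionsTracker` and pushes it to Kaggle as a private script kernel. The sketch assumes the official `kaggle` CLI is installed and authenticated with the user's own token; the helper name `push_to_kaggle` and the wrapper layout are illustrative.

```python
import json
import subprocess
import tempfile
import textwrap
from pathlib import Path


def push_to_kaggle(user_code: str, kernel_slug: str, enable_gpu: bool = True) -> None:
    """Illustrative sketch: wrap user code with CodeCarbon tracking and push it
    to Kaggle as a private script kernel via the official `kaggle` CLI."""
    # Wrap the user's code in an EmissionsTracker so the emissions measured on
    # Kaggle are reported back to the CodeCarbon API (save_to_api=True); the
    # CodeCarbon API key is assumed to be configured on the kernel via
    # CodeCarbon's usual configuration mechanism.
    wrapped = (
        "from codecarbon import EmissionsTracker\n\n"
        "tracker = EmissionsTracker(save_to_api=True)\n"
        "tracker.start()\n"
        "try:\n"
        + textwrap.indent(user_code, "    ")
        + "\nfinally:\n"
        "    tracker.stop()\n"
    )
    with tempfile.TemporaryDirectory() as tmp:
        folder = Path(tmp)
        (folder / "main.py").write_text(wrapped)
        # Standard metadata file expected by `kaggle kernels push`.
        (folder / "kernel-metadata.json").write_text(json.dumps({
            "id": kernel_slug,            # e.g. "username/codecarbon-remote-run"
            "title": "codecarbon-remote-run",
            "code_file": "main.py",
            "language": "python",
            "kernel_type": "script",
            "is_private": True,
            "enable_gpu": enable_gpu,     # uses Kaggle's free-tier GPU quota
            "enable_internet": True,      # required to reach the CodeCarbon API
        }))
        # The Kaggle CLI reads the user's own API token (~/.kaggle/kaggle.json
        # or the KAGGLE_USERNAME / KAGGLE_KEY environment variables).
        subprocess.run(["kaggle", "kernels", "push", "-p", str(folder)], check=True)
```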

Related Issue

CodeCarbon MCP Development

Motivation and Context

AI agents, particularly MCP servers, increasingly need to run and test code autonomously. Today, this often relies on user-managed machines, leading to inconsistent environments, limited reproducibility, and higher operational overhead.
A standardized remote execution mechanism allows agents to run code independently of the user’s host system, ensuring consistent, isolated, and reproducible execution. By leveraging managed platforms with open APIs, agents can execute workloads, collect results, and report metrics without requiring users to provision infrastructure.
For CodeCarbon, this capability is critical to support future MCP server development, enabling autonomous remote execution and carbon estimation while reducing cost and complexity for users.

How Has This Been Tested?

Automated tests are yet to be added. I have tested it locally and attached a video preview.

Screenshots (if appropriate):

Video preview:
https://github.com/user-attachments/assets/82ac3bc9-3bde-47ef-9edd-9bd64240e161

Types of changes

What types of changes does your code introduce? Put an x in all the boxes that apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

Go over all the following points, and put an x in all the boxes that apply.

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have read the CONTRIBUTING.md document.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

@tanish111 requested a review from a team as a code owner on January 18, 2026, 17:44
@SaboniAmine (Member) left a comment


Hello Tanish, thanks for your contribution!
Enabling remote code execution on the public API isn't something that we will allow, even if it's just to publish a Kaggle script. Please submit another design (the MCP discord channel might be the correct place to discuss it) that doesn't run on the server which manages user data.

@tanish111 (Author)

> Hello Tanish, thanks for your contribution!
> Enabling remote code execution on the public API isn't something that we will allow, even if it's just to publish a Kaggle script.

Just to clarify the intent and current design: this approach does not allow arbitrary remote code execution on a public endpoint. Execution is only possible when the user explicitly provides both their CodeCarbon API key and their personal Kaggle API credentials. Without these, no code can be run.
In practice, CodeCarbon itself is not executing user code. The code is forwarded to Kaggle’s existing execution infrastructure via their public APIs, which run the workload on behalf of the user under their own account. From our side, this is effectively a relay to an already sandboxed environment rather than a new execution surface.
This idea originally came out of discussions in the MCP group, and I fully agree that the current proposal would need refinement and safeguards. I’m very open to making changes to better align with CodeCarbon’s security and API boundaries.
That said, some form of remote execution feels important for the broader goal of enabling users to explore and compare design choices from an energy and carbon perspective, especially when local or GPU resources are a limiting factor. This proposal was meant as an initial step toward lowering that barrier, not as a final design.
Happy to discuss alternative architectures or constraints that could make this acceptable while still supporting that goal.

> Please submit another design (the MCP Discord channel might be the correct place to discuss it) that doesn't run on the server which manages user data.

The current design does not run code on the server that manages user data in any form. It can be modeled as rerouting user-provided data (with code treated as part of that data) to existing external infrastructure to obtain execution insights.
I’ll submit an updated design reflecting this framing and continue the discussion in the MCP Discord channel as suggested.
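
To illustrate the relay model described above, here is a hypothetical sketch of the endpoint shape (not the code actually submitted in this PR), assuming FastAPI as used by the CodeCarbon API server; the route path, request model, module name, and the `push_to_kaggle` helper from the sketch in the description are all illustrative:

```python
from fastapi import APIRouter
from pydantic import BaseModel

from remote_runner import push_to_kaggle  # hypothetical module from the earlier sketch

router = APIRouter()


class RemoteRunRequest(BaseModel):
    # Nothing can be executed unless the user explicitly supplies both
    # their CodeCarbon API key and their personal Kaggle credentials.
    code: str
    codecarbon_api_key: str
    kaggle_username: str
    kaggle_key: str


@router.post("/remote-runs")  # illustrative path only
def create_remote_run(request: RemoteRunRequest) -> dict:
    # The server does not execute request.code itself; it only forwards it to
    # Kaggle's kernel infrastructure, which runs it under the user's account.
    # In a real relay, the Kaggle credentials from the request would be handed
    # to the push step rather than read from local configuration.
    kernel_slug = f"{request.kaggle_username}/codecarbon-remote-run"
    push_to_kaggle(request.code, kernel_slug)
    return {"status": "submitted", "kernel": kernel_slug}
```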

@benoit-cty changed the title from "New Endpoint Added :)" to "[MCP Draft] Prototype of remote code execution" on Jan 18, 2026
@benoit-cty changed the title from "[MCP Draft] Prototype of remote code execution" to "[MCP][Draft] Prototype of remote code execution" on Jan 18, 2026
@benoit-cty (Contributor)

Hi @tanish111, thanks for the contribution.
I will think about the best architecture with Amine.
At a minimum, we need a separate sub-project for this MCP, so that it can be run locally, for example, and the user doesn't expose their API key to our server, in a zero-trust design.

@tanish111 (Author)

@benoit-cty @SaboniAmine I will work on a local implementation this week and get back to you.
Thanks for the comments.

@SaboniAmine (Member)

Hey Tanish, thanks for your understanding.
Limitations on including this in the user-data collection API server are:

  • we should not allow user-provided files to be written on the server instance, even in /tmp
  • the injector service seems to have limited exposure to remote code execution; however, manipulating kernels and execution contexts doesn't seem appropriate on this server. By the way, @inimaz is also iterating on CodeCarbon injection in Jupyter kernels, so there may be a way to mutualize the implementation
  • reading run results, if data is saved in the API, requires an OAuth token, which shouldn't be leaked to the agent running the orchestration, as it grants access to more critical data

I think that running this from a different server, potentially with a different set of credentials, could solve those limitations.
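
As a rough illustration of that split (a sketch, not a committed design): the orchestration, including Kaggle credential handling and status polling, would run on the user's machine or a dedicated runner, so neither the Kaggle token nor the OAuth token ever reaches the user-data API server. The function name and polling interval below are arbitrary.

```python
import subprocess
import time


def wait_for_kernel(kernel_slug: str, poll_seconds: int = 30) -> str:
    """Poll a Kaggle kernel from the user's machine (or a dedicated runner).

    The Kaggle API token stays in ~/.kaggle/kaggle.json on that machine; the
    CodeCarbon API server never sees it. Emissions reach the dashboard via the
    tracker running inside the kernel, not via this process.
    """
    while True:
        result = subprocess.run(
            ["kaggle", "kernels", "status", kernel_slug],
            capture_output=True, text=True, check=True,
        )
        status = result.stdout.strip()
        if "complete" in status.lower() or "error" in status.lower():
            return status
        time.sleep(poll_seconds)
```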
