Commit 2984423

Authored by moonbox3 and crickman on Feb 20, 2025
Python: Add support for AutoGen's 0.2 ConversableAgent (#10607)
### Motivation and Context

For those who use AutoGen's 0.2 `ConversableAgent`, we're providing support in Semantic Kernel to run this agent type. This assumes one will port their existing AG 0.2.X `ConversableAgent` code and run it in Semantic Kernel.

Note: as we move towards GA for our agent chat patterns, we are analyzing how best to support AutoGen agents. This PR does not provide support for the AG `ConversableAgent` group chat patterns that exist in the 0.2.X package.

### Description

Add support and samples for the AG 0.2.X `ConversableAgent`:

- Add unit test coverage.
- Add samples and READMEs.
- Closes #10407

### Contribution Checklist

- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md) and the [pre-submission formatting script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts) raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄

Co-authored-by: Chris <[email protected]>

File tree

10 files changed: +712 −66 lines
 

‎python/pyproject.toml (+3)

```diff
@@ -50,6 +50,9 @@
 ### Optional dependencies
 [project.optional-dependencies]
+autogen = [
+    "autogen-agentchat >= 0.2, <0.4"
+]
 azure = [
     "azure-ai-inference >= 1.0.0b6",
     "azure-ai-projects >= 1.0.0b5",
```
New file (+20 lines):

## AutoGen Conversable Agent (v0.2.X)

Semantic Kernel Python supports running AutoGen Conversable Agents provided in the 0.2.X package.

### Limitations

Currently, there are some limitations to note:

- AutoGen Conversable Agents in Semantic Kernel run asynchronously and do not support streaming of agent inputs or responses.
- The `AutoGenConversableAgent` in Semantic Kernel Python cannot be configured as part of a Semantic Kernel `AgentGroupChat`. As we progress towards GA for our agent group chat patterns, we will explore ways to integrate AutoGen agents into a Semantic Kernel group chat scenario.

### Installation

Install the `semantic-kernel` package with the `autogen` extra:

```bash
pip install semantic-kernel[autogen]
```

For an example of how to integrate an AutoGen Conversable Agent using the Semantic Kernel Agent abstraction, please refer to [`autogen_conversable_agent_simple_convo.py`](autogen_conversable_agent_simple_convo.py).
New file (+61 lines):

````python
# Copyright (c) Microsoft. All rights reserved.

import asyncio
import os

from autogen import ConversableAgent
from autogen.coding import LocalCommandLineCodeExecutor

from semantic_kernel.agents.autogen.autogen_conversable_agent import AutoGenConversableAgent

"""
The following sample demonstrates how to use the AutoGenConversableAgent to create a reply from an agent
to a message with a code block. The agent executes the code block and replies with the output.

The sample follows the AutoGen flow outlined here:
https://microsoft.github.io/autogen/0.2/docs/tutorial/code-executors#local-execution
"""


async def main():
    # Store the generated code files in the directory where this script is located.
    temp_dir = os.path.dirname(os.path.realpath(__file__))

    # Create a local command line code executor.
    executor = LocalCommandLineCodeExecutor(
        timeout=10,  # Timeout for each code execution in seconds.
        work_dir=temp_dir,  # Use this directory to store the code files.
    )

    # Create an agent with a code executor configuration.
    code_executor_agent = ConversableAgent(
        "code_executor_agent",
        llm_config=False,  # Turn off LLM for this agent.
        code_execution_config={"executor": executor},  # Use the local command line code executor.
        human_input_mode="ALWAYS",  # Always take human input for this agent for safety.
    )

    autogen_agent = AutoGenConversableAgent(conversable_agent=code_executor_agent)

    message_with_code_block = """This is a message with code block.
The code block is below:
```python
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randint(0, 100, 100)
y = np.random.randint(0, 100, 100)
plt.scatter(x, y)
plt.savefig('scatter.png')
print('Scatter plot saved to scatter.png')
```
This is the end of the message.
"""

    async for content in autogen_agent.invoke(message=message_with_code_block):
        print(f"# {content.role} - {content.name or '*'}: '{content.content}'")


if __name__ == "__main__":
    asyncio.run(main())
````
New file (+95 lines):

```python
# Copyright (c) Microsoft. All rights reserved.

import asyncio
import os
from typing import Annotated, Literal

from autogen import ConversableAgent, register_function

from semantic_kernel.agents.autogen.autogen_conversable_agent import AutoGenConversableAgent
from semantic_kernel.contents.function_call_content import FunctionCallContent
from semantic_kernel.contents.function_result_content import FunctionResultContent

"""
The following sample demonstrates how to use the AutoGenConversableAgent to create a conversation between two agents
where one agent suggests a tool function call and the other agent executes the tool function call.

In this example, the assistant agent suggests a calculator tool function call to the user proxy agent. The user proxy
agent executes the calculator tool function call. The assistant agent and the user proxy agent are created using the
ConversableAgent class. The calculator tool function is registered with the assistant agent and the user proxy agent.

This sample follows the AutoGen flow outlined here:
https://microsoft.github.io/autogen/0.2/docs/tutorial/tool-use
"""


Operator = Literal["+", "-", "*", "/"]


async def main():
    def calculator(a: int, b: int, operator: Annotated[Operator, "operator"]) -> int:
        if operator == "+":
            return a + b
        if operator == "-":
            return a - b
        if operator == "*":
            return a * b
        if operator == "/":
            return int(a / b)
        raise ValueError("Invalid operator")

    assistant = ConversableAgent(
        name="Assistant",
        system_message="You are a helpful AI assistant. "
        "You can help with simple calculations. "
        "Return 'TERMINATE' when the task is done.",
        # Note: the model "gpt-4o" leads to a "division by zero" error that doesn't occur with "gpt-4o-mini"
        # or even "gpt-4".
        llm_config={
            "config_list": [{"model": os.environ["OPENAI_CHAT_MODEL_ID"], "api_key": os.environ["OPENAI_API_KEY"]}]
        },
    )

    # Create a Semantic Kernel AutoGenConversableAgent based on the AutoGen ConversableAgent.
    assistant_agent = AutoGenConversableAgent(conversable_agent=assistant)

    user_proxy = ConversableAgent(
        name="User",
        llm_config=False,
        is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
        human_input_mode="NEVER",
    )

    # Register the tool function with the assistant agent so the LLM can suggest calls to it.
    assistant.register_for_llm(name="calculator", description="A simple calculator")(calculator)

    # Register the tool function with the user proxy agent.
    user_proxy.register_for_execution(name="calculator")(calculator)

    register_function(
        calculator,
        caller=assistant,  # The assistant agent can suggest calls to the calculator.
        executor=user_proxy,  # The user proxy agent can execute the calculator calls.
        name="calculator",  # By default, the function name is used as the tool name.
        description="A simple calculator",  # A description of the tool.
    )

    # Create a Semantic Kernel AutoGenConversableAgent based on the AutoGen ConversableAgent.
    user_proxy_agent = AutoGenConversableAgent(conversable_agent=user_proxy)

    async for content in user_proxy_agent.invoke(
        recipient=assistant_agent,
        message="What is (44232 + 13312 / (232 - 32)) * 5?",
        max_turns=10,
    ):
        for item in content.items:
            match item:
                case FunctionResultContent(result=r):
                    print(f"# {content.role} - {content.name or '*'}: '{r}'")
                case FunctionCallContent(function_name=fn, arguments=arguments):
                    print(f"# {content.role} - {content.name or '*'}: Function Name: '{fn}', Arguments: '{arguments}'")
                case _:
                    print(f"# {content.role} - {content.name or '*'}: '{content.content}'")


if __name__ == "__main__":
    asyncio.run(main())
```
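For reference, here is one mechanical walkthrough of the sample's expression `(44232 + 13312 / (232 - 32)) * 5` using the same `calculator` function the sample registers (the agents are free to decompose the expression differently). Note that the calculator's `int(a / b)` truncates, so the result differs from exact arithmetic:

```python
from typing import Annotated, Literal

Operator = Literal["+", "-", "*", "/"]


def calculator(a: int, b: int, operator: Annotated[Operator, "operator"]) -> int:
    """Same integer calculator as in the sample above."""
    if operator == "+":
        return a + b
    if operator == "-":
        return a - b
    if operator == "*":
        return a * b
    if operator == "/":
        return int(a / b)  # Truncating division.
    raise ValueError("Invalid operator")


step1 = calculator(232, 32, "-")       # 232 - 32 = 200
step2 = calculator(13312, step1, "/")  # 13312 / 200 = 66.56, truncated to 66
step3 = calculator(44232, step2, "+")  # 44232 + 66 = 44298
result = calculator(step3, 5, "*")     # 44298 * 5 = 221490
print(result)  # 221490
```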
New file (+61 lines):

```python
# Copyright (c) Microsoft. All rights reserved.

import asyncio
import os

from autogen import ConversableAgent

from semantic_kernel.agents.autogen.autogen_conversable_agent import AutoGenConversableAgent

"""
The following sample demonstrates how to use the AutoGenConversableAgent to create a conversation between two agents
where one agent asks for a joke and the other agent tells one.

The sample follows the AutoGen flow outlined here:
https://microsoft.github.io/autogen/0.2/docs/tutorial/introduction#roles-and-conversations
"""


async def main():
    cathy = ConversableAgent(
        "cathy",
        system_message="Your name is Cathy and you are a part of a duo of comedians.",
        llm_config={
            "config_list": [
                {
                    "model": os.environ["OPENAI_CHAT_MODEL_ID"],
                    "temperature": 0.9,
                    "api_key": os.environ.get("OPENAI_API_KEY"),
                }
            ]
        },
        human_input_mode="NEVER",  # Never ask for human input.
    )

    cathy_autogen_agent = AutoGenConversableAgent(conversable_agent=cathy)

    joe = ConversableAgent(
        "joe",
        system_message="Your name is Joe and you are a part of a duo of comedians.",
        llm_config={
            "config_list": [
                {
                    "model": os.environ["OPENAI_CHAT_MODEL_ID"],
                    "temperature": 0.7,
                    "api_key": os.environ.get("OPENAI_API_KEY"),
                }
            ]
        },
        human_input_mode="NEVER",  # Never ask for human input.
    )

    joe_autogen_agent = AutoGenConversableAgent(conversable_agent=joe)

    async for content in cathy_autogen_agent.invoke(
        recipient=joe_autogen_agent, message="Tell me a joke about the stock market.", max_turns=3
    ):
        print(f"# {content.role} - {content.name or '*'}: '{content.content}'")


if __name__ == "__main__":
    asyncio.run(main())
```
New file (+20 lines):

## AutoGen Conversable Agent (v0.2.X)

Semantic Kernel Python supports running AutoGen Conversable Agents provided in the 0.2.X package.

### Limitations

Currently, there are some limitations to note:

- AutoGen Conversable Agents in Semantic Kernel run asynchronously and do not support streaming of agent inputs or responses.
- The `AutoGenConversableAgent` in Semantic Kernel Python cannot be configured as part of a Semantic Kernel `AgentGroupChat`. As we progress towards GA for our agent group chat patterns, we will explore ways to integrate AutoGen agents into a Semantic Kernel group chat scenario.

### Installation

Install the `semantic-kernel` package with the `autogen` extra:

```bash
pip install semantic-kernel[autogen]
```

For an example of how to integrate an AutoGen Conversable Agent using the Semantic Kernel Agent abstraction, please refer to [`autogen_conversable_agent_simple_convo.py`](../../../samples/concepts/agents/autogen_conversable_agent/autogen_conversable_agent_simple_convo.py).
New file (+5 lines):

```python
# Copyright (c) Microsoft. All rights reserved.

from semantic_kernel.agents.autogen.autogen_conversable_agent import AutoGenConversableAgent

__all__ = ["AutoGenConversableAgent"]
```
New file (+163 lines):

```python
# Copyright (c) Microsoft. All rights reserved.

import logging
from collections.abc import AsyncIterable, Callable
from typing import TYPE_CHECKING, Any

from autogen import ConversableAgent

from semantic_kernel.agents.agent import Agent
from semantic_kernel.contents.chat_message_content import ChatMessageContent
from semantic_kernel.contents.function_call_content import FunctionCallContent
from semantic_kernel.contents.function_result_content import FunctionResultContent
from semantic_kernel.contents.text_content import TextContent
from semantic_kernel.contents.utils.author_role import AuthorRole
from semantic_kernel.exceptions.agent_exceptions import AgentInvokeException
from semantic_kernel.functions.kernel_arguments import KernelArguments

if TYPE_CHECKING:
    from autogen.cache import AbstractCache

    from semantic_kernel.kernel import Kernel

logger: logging.Logger = logging.getLogger(__name__)


class AutoGenConversableAgent(Agent):
    """A Semantic Kernel wrapper around an AutoGen 0.2 `ConversableAgent`.

    This allows one to use it as a Semantic Kernel `Agent`. Note: this agent abstraction
    does not currently allow for the use of AgentGroupChat within Semantic Kernel.
    """

    conversable_agent: ConversableAgent

    def __init__(self, conversable_agent: ConversableAgent, **kwargs: Any) -> None:
        """Initialize the AutoGenConversableAgent.

        Args:
            conversable_agent: The existing AutoGen 0.2 ConversableAgent instance
            kwargs: Other Agent base class arguments (e.g. name, id, instructions)
        """
        args: dict[str, Any] = {
            "name": conversable_agent.name,
            "description": conversable_agent.description,
            "instructions": conversable_agent.system_message,
            "conversable_agent": conversable_agent,
        }

        if kwargs:
            args.update(kwargs)

        super().__init__(**args)

    async def invoke(
        self,
        *,
        recipient: "AutoGenConversableAgent | None" = None,
        clear_history: bool = True,
        silent: bool = True,
        cache: "AbstractCache | None" = None,
        max_turns: int | None = None,
        summary_method: str | Callable | None = ConversableAgent.DEFAULT_SUMMARY_METHOD,
        summary_args: dict | None = {},
        message: dict | str | Callable | None = None,
        **kwargs: Any,
    ) -> AsyncIterable[ChatMessageContent]:
        """A direct `invoke` method for the ConversableAgent.

        Args:
            recipient: The recipient ConversableAgent to chat with
            clear_history: Whether to clear the chat history before starting. True by default.
            silent: Whether to suppress console output. True by default.
            cache: The cache to use for storing chat history
            max_turns: The maximum number of turns to chat for
            summary_method: The method to use for summarizing the chat
            summary_args: The arguments to pass to the summary method
            message: The initial message to send. If message is not provided,
                the agent will wait for the user to provide the first message.
            kwargs: Additional keyword arguments
        """
        if recipient is not None:
            if not isinstance(recipient, AutoGenConversableAgent):
                raise AgentInvokeException(
                    f"Invalid recipient type: {type(recipient)}. "
                    "Recipient must be an instance of AutoGenConversableAgent."
                )

            chat_result = await self.conversable_agent.a_initiate_chat(
                recipient=recipient.conversable_agent,
                clear_history=clear_history,
                silent=silent,
                cache=cache,
                max_turns=max_turns,
                summary_method=summary_method,
                summary_args=summary_args,
                message=message,  # type: ignore
                **kwargs,
            )

            logger.info(f"Called AutoGenConversableAgent.a_initiate_chat with recipient: {recipient}")

            for message in chat_result.chat_history:
                yield AutoGenConversableAgent._to_chat_message_content(message)  # type: ignore
        else:
            reply = await self.conversable_agent.a_generate_reply(
                messages=[{"role": "user", "content": message}],
            )

            logger.info(f"Called AutoGenConversableAgent.a_generate_reply with recipient: {recipient}")

            if isinstance(reply, str):
                yield ChatMessageContent(content=reply, role=AuthorRole.ASSISTANT)
            elif isinstance(reply, dict):
                yield ChatMessageContent(**reply)
            else:
                raise AgentInvokeException(f"Unexpected reply type from `a_generate_reply`: {type(reply)}")

    async def invoke_stream(
        self,
        message: str,
        kernel: "Kernel | None" = None,
        arguments: KernelArguments | None = None,
        **kwargs: Any,
    ) -> AsyncIterable[ChatMessageContent]:
        """Invoke the agent with a stream of messages."""
        raise NotImplementedError("The AutoGenConversableAgent does not support streaming.")

    @staticmethod
    def _to_chat_message_content(message: dict[str, Any]) -> ChatMessageContent:
        """Translate an AutoGen message to a Semantic Kernel ChatMessageContent."""
        items: list[TextContent | FunctionCallContent | FunctionResultContent] = []
        role = AuthorRole(message.get("role"))
        name: str = message.get("name", "")

        content = message.get("content")
        if content is not None:
            text = TextContent(text=content)
            items.append(text)

        if role == AuthorRole.ASSISTANT:
            tool_calls = message.get("tool_calls")
            if tool_calls is not None:
                for tool_call in tool_calls:
                    items.append(
                        FunctionCallContent(
                            id=tool_call.get("id"),
                            function_name=tool_call.get("name"),
                            arguments=tool_call.get("function").get("arguments"),
                        )
                    )

        if role == AuthorRole.TOOL:
            tool_responses = message.get("tool_responses")
            if tool_responses is not None:
                for tool_response in tool_responses:
                    items.append(
                        FunctionResultContent(
                            id=tool_response.get("tool_call_id"),
                            result=tool_response.get("content"),
                        )
                    )

        return ChatMessageContent(role=role, items=items, name=name)  # type: ignore
```
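The message-translation idea used by the wrapper can be sketched standalone with plain dicts in place of the Semantic Kernel content types (the `translate_message` name and dict shapes below are illustrative, not part of the library):

```python
from typing import Any


def translate_message(message: dict[str, Any]) -> dict[str, Any]:
    """Flatten an AutoGen-style chat message into role/name/items (illustrative sketch)."""
    items: list[dict[str, Any]] = []
    role = message.get("role")
    name = message.get("name", "")

    # Plain text content becomes a text item.
    content = message.get("content")
    if content is not None:
        items.append({"type": "text", "text": content})

    # Assistant messages may carry tool-call suggestions.
    if role == "assistant":
        for tool_call in message.get("tool_calls") or []:
            items.append({
                "type": "function_call",
                "id": tool_call.get("id"),
                "name": tool_call.get("name"),
                "arguments": tool_call.get("function", {}).get("arguments"),
            })

    # Tool messages may carry the results of executed calls.
    if role == "tool":
        for tool_response in message.get("tool_responses") or []:
            items.append({
                "type": "function_result",
                "id": tool_response.get("tool_call_id"),
                "result": tool_response.get("content"),
            })

    return {"role": role, "name": name, "items": items}


msg = {
    "role": "assistant",
    "name": "Assistant",
    "content": None,
    "tool_calls": [
        {"id": "call_1", "name": "calculator",
         "function": {"arguments": '{"a": 1, "b": 2, "operator": "+"}'}}
    ],
}
out = translate_message(msg)
print(out)
```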
New file (+106 lines):

```python
# Copyright (c) Microsoft. All rights reserved.

from unittest.mock import AsyncMock, MagicMock

import pytest
from autogen import ConversableAgent

from semantic_kernel.agents.autogen.autogen_conversable_agent import AutoGenConversableAgent
from semantic_kernel.contents.utils.author_role import AuthorRole
from semantic_kernel.exceptions.agent_exceptions import AgentInvokeException


@pytest.fixture
def mock_conversable_agent():
    agent = MagicMock(spec=ConversableAgent)
    agent.name = "MockName"
    agent.description = "MockDescription"
    agent.system_message = "MockSystemMessage"
    return agent


async def test_autogen_conversable_agent_initialization(mock_conversable_agent):
    agent = AutoGenConversableAgent(mock_conversable_agent, id="mock_id")
    assert agent.name == "MockName"
    assert agent.description == "MockDescription"
    assert agent.instructions == "MockSystemMessage"
    assert agent.conversable_agent == mock_conversable_agent


async def test_autogen_conversable_agent_invoke_with_recipient(mock_conversable_agent):
    mock_conversable_agent.a_initiate_chat = AsyncMock()
    mock_conversable_agent.a_initiate_chat.return_value = MagicMock(
        chat_history=[
            {"role": "user", "content": "Hello from user!"},
            {"role": "assistant", "content": "Hello from assistant!"},
        ]
    )
    agent = AutoGenConversableAgent(mock_conversable_agent)
    recipient_agent = MagicMock(spec=AutoGenConversableAgent)
    recipient_agent.conversable_agent = MagicMock(spec=ConversableAgent)

    messages = []
    async for msg in agent.invoke(recipient=recipient_agent, message="Test message", arg1="arg1"):
        messages.append(msg)

    mock_conversable_agent.a_initiate_chat.assert_awaited_once()
    assert len(messages) == 2
    assert messages[0].role == AuthorRole.USER
    assert messages[0].content == "Hello from user!"
    assert messages[1].role == AuthorRole.ASSISTANT
    assert messages[1].content == "Hello from assistant!"


async def test_autogen_conversable_agent_invoke_without_recipient_string_reply(mock_conversable_agent):
    mock_conversable_agent.a_generate_reply = AsyncMock(return_value="Mocked assistant response")
    agent = AutoGenConversableAgent(mock_conversable_agent)

    messages = []
    async for msg in agent.invoke(message="Hello"):
        messages.append(msg)

    mock_conversable_agent.a_generate_reply.assert_awaited_once()
    assert len(messages) == 1
    assert messages[0].role == AuthorRole.ASSISTANT
    assert messages[0].content == "Mocked assistant response"


async def test_autogen_conversable_agent_invoke_without_recipient_dict_reply(mock_conversable_agent):
    mock_conversable_agent.a_generate_reply = AsyncMock(
        return_value={
            "content": "Mocked assistant response",
            "role": "assistant",
            "name": "AssistantName",
        }
    )
    agent = AutoGenConversableAgent(mock_conversable_agent)

    messages = []
    async for msg in agent.invoke(message="Hello"):
        messages.append(msg)

    mock_conversable_agent.a_generate_reply.assert_awaited_once()
    assert len(messages) == 1
    assert messages[0].role == AuthorRole.ASSISTANT
    assert messages[0].content == "Mocked assistant response"
    assert messages[0].name == "AssistantName"


async def test_autogen_conversable_agent_invoke_without_recipient_unexpected_type(mock_conversable_agent):
    mock_conversable_agent.a_generate_reply = AsyncMock(return_value=12345)
    agent = AutoGenConversableAgent(mock_conversable_agent)

    with pytest.raises(AgentInvokeException):
        async for _ in agent.invoke(message="Hello"):
            pass


async def test_autogen_conversable_agent_invoke_with_invalid_recipient_type(mock_conversable_agent):
    mock_conversable_agent.a_generate_reply = AsyncMock(return_value=12345)
    agent = AutoGenConversableAgent(mock_conversable_agent)

    recipient = MagicMock()

    with pytest.raises(AgentInvokeException):
        async for _ in agent.invoke(recipient=recipient, message="Hello"):
            pass
```
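These tests lean on `MagicMock(spec=...)` plus `AsyncMock` (both from the stdlib `unittest.mock`) to stand in for the async AutoGen calls. The pattern can be illustrated standalone against a plain class (the `FakeAgent` class below is illustrative, not part of either library):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock


class FakeAgent:
    """A plain stand-in for ConversableAgent in this sketch."""

    name = "real"

    async def a_generate_reply(self, messages):
        return "real reply"


# spec= restricts the mock to FakeAgent's attributes, so typos fail fast.
mock = MagicMock(spec=FakeAgent)
mock.name = "MockName"
# AsyncMock makes the attribute awaitable, mirroring the real async method.
mock.a_generate_reply = AsyncMock(return_value="Mocked response")

reply = asyncio.run(mock.a_generate_reply(messages=[{"role": "user", "content": "hi"}]))
mock.a_generate_reply.assert_awaited_once()
print(reply)  # Mocked response
```

This is the same shape as the fixture above: a spec'd mock with hand-set attributes, an `AsyncMock` per awaited method, and `assert_awaited_once()` to verify the call.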

‎python/uv.lock (+178 −66)

Some generated files are not rendered by default.
