fix(ollama): type error in dict combination of ollama instrumentation #2814
Conversation
❌ Changes requested. Reviewed everything up to 48de692 in 1 minute and 58 seconds
More details:
- Looked at 74 lines of code in 3 files
- Skipped 1 file when reviewing
- Skipped posting 8 drafted comments based on config settings
1. `packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py:245`
   - Draft comment: Using `res.model_dump()` converts `res` to a dict; ensure `res` always supports `model_dump()` to avoid a runtime `AttributeError` if its type changes.
   - Reason this comment was not posted: decided after close inspection that this draft comment was likely wrong and/or not actionable; usefulness confidence = 20% vs. threshold = 50%.
This appears to be a change in how the response object is handled, going from using res directly to using res.model_dump(). The comment raises a valid concern about runtime errors if res doesn't support model_dump(). However, this seems to be adapting to Ollama's API which likely uses Pydantic models, where model_dump() is a standard method. The comment is asking for verification rather than pointing out a clear issue.
I may be making assumptions about Ollama's API. The comment could be highlighting a genuine risk if the API response format is not guaranteed to be a Pydantic model.
While there's some validity to the concern, this appears to be an adaptation to Ollama's API which uses Pydantic models. The comment is asking for verification rather than identifying a clear issue that needs fixing.
Delete the comment because it's asking for verification rather than identifying a clear issue, and is making speculative assumptions about potential future API changes.
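For illustration, the defensive guard this draft comment hints at could look like the following sketch. This is hypothetical code, not part of the PR; `_to_dict` is an invented name:

```python
# Hypothetical guard, not taken from the PR: normalize a response chunk
# whether it is a pydantic model (ollama >= 0.4 returns ChatResponse)
# or a plain dict (older ollama releases).
def _to_dict(res):
    if hasattr(res, "model_dump"):  # pydantic v2 models expose model_dump()
        return res.model_dump()
    return res  # already a plain dict
```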
2. `packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py:264`
   - Draft comment: Ensure that `res.model_dump()` is valid for all response item types in async streaming; otherwise an `AttributeError` may occur.
   - Reason this comment was not posted: comment did not seem useful; confidence useful = 0% <= threshold 50%. The comment is asking the author to ensure that a specific function call is valid for all response item types, which is a form of asking the author to double-check their work. This violates the rule against asking the author to ensure behavior is intended or tested.
3. `packages/sample-app/pyproject.toml:56`
   - Draft comment: Dependency on `ollama` added with version `^0.4.7`. Confirm this version aligns with instrumentation expectations.
   - Reason this comment was not posted: confidence changes required = 50% <= threshold 50%. None.
4. `packages/sample-app/sample_app/ollama_streaming.py:1`
   - Draft comment: New sample app file for streaming responses looks well-structured and valid for testing Ollama instrumentation.
   - Reason this comment was not posted: confidence changes required = 0% <= threshold 50%. None.
5. `packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py:264`
   - Draft comment: Similar to the synchronous case, ensure that calling `res.model_dump()` in the async accumulator is safe. Consider refactoring the merge logic to reduce duplication.
   - Reason this comment was not posted: decided after close inspection that this draft comment was likely wrong and/or not actionable; usefulness confidence = 20% vs. threshold = 50%.
The comment has two parts: 1) Checking if model_dump() is safe in async - but since both sync/async were updated the same way, this seems unnecessary. 2) Suggesting refactoring duplicate code - while there is duplication, this is a common pattern for handling both sync/async versions and may be intentional. The comment doesn't provide concrete refactoring suggestions.
The duplication between sync/async handlers is a real issue that could be improved. And we don't have full context about whether model_dump() could have async-specific issues.
However, the sync/async duplication pattern is very common in Python and may be preferred for clarity. The model_dump() change was applied identically to both versions, suggesting it's already been validated as safe.
The comment should be deleted as it raises speculative concerns about model_dump() safety and makes a vague suggestion about refactoring without providing actionable guidance.
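As a rough illustration of the refactor the draft comment gestures at, the merge step could be shared between the two accumulators. A minimal sketch, assuming the chunks expose `message.content` and omitting the span/event arguments the real functions take:

```python
def _merge_chunk(accumulated: dict, res) -> None:
    # Normalize to a dict (pydantic model on ollama >= 0.4, plain dict
    # before), then fold the chunk's content into the running total.
    chunk = res.model_dump() if hasattr(res, "model_dump") else res
    accumulated["message"]["content"] += chunk.get("message", {}).get("content", "")

def _accumulate_streaming_response(response):
    accumulated = {"message": {"content": ""}}
    for res in response:
        yield res
        _merge_chunk(accumulated, res)

async def _aaccumulate_streaming_response(response):
    accumulated = {"message": {"content": ""}}
    async for res in response:
        yield res
        _merge_chunk(accumulated, res)
```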
6. `packages/sample-app/sample_app/ollama_streaming.py:33`
   - Draft comment: Please add a newline at the end of the file to follow best practices.
   - Reason this comment was not posted: decided after close inspection that this draft comment was likely wrong and/or not actionable; usefulness confidence = 10% vs. threshold = 50%.
While having a trailing newline is indeed a best practice, this is a very minor issue that would typically be handled by editor configurations or linters. It doesn't affect the logic or functionality of the code. Our rules state not to make comments that are obvious or unimportant, and this falls into that category.
A missing trailing newline can cause issues with some Unix tools and make diffs less clean. It's a widely accepted best practice that has practical implications.
While the practice has merit, this is exactly the kind of minor issue that should be handled by automated tooling rather than manual review comments. It doesn't warrant human intervention.
Delete this comment. This is a minor formatting issue that should be handled by automated tooling rather than PR comments.
7. `packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/__init__.py:249`
   - Draft comment: The async function `_aaccumulate_streaming_response` appears to have a typo with an extra 'a'. Consider renaming it to `_accumulate_streaming_response` for consistency with its synchronous counterpart and to improve readability.
   - Reason this comment was not posted: comment was not on a location in the diff, so it can't be submitted as a review comment.
8. `packages/sample-app/sample_app/ollama_streaming.py:33`
   - Draft comment: The file is missing a newline at the end. Please add a newline for improved code style.
   - Reason this comment was not posted: decided after close inspection that this draft comment was likely wrong and/or not actionable; usefulness confidence = 10% vs. threshold = 50%.
While missing newline at end of file is a common style issue, it's a very minor one. Most modern editors and git will warn about this automatically. It's not a logic issue or something that affects functionality. The rules state we should not make purely informative comments or obvious/unimportant ones.
The missing newline could cause issues with some tools or when concatenating files. It is a widely accepted convention to end files with newlines.
While true, this is still a very minor issue that most development environments will handle automatically. It doesn't warrant a PR comment.
Delete this comment as it's too minor of a style issue to warrant a PR comment. Modern tools will handle this automatically.
Force-pushed from 48de692 to 2e37bf9
Thanks @minimAluminiumalism - can you fix the build / lint issues? Also, can you add tests?
Force-pushed from 2e37bf9 to 9531f38
The build issue appears to have been caused by a missing poetry.lock file, which I've fixed. There are existing tests in …
Can you rerun the CI/CD pipeline?
Force-pushed from 9531f38 to d725c51
@minimAluminiumalism done - sorry, that's a GitHub limitation for first-time contributors in a repo :/
Force-pushed from d725c51 to 52f418b
Fixed and I think it's ok now.
Force-pushed from 4b04dde to 2efea6a
@minimAluminiumalism make sure to run `nx run-many -t lock` from the root to fix the error
Force-pushed from 2efea6a to 40c4ceb
Force-pushed from 40c4ceb to 58f9a1d
Sorry I missed checking that ;) I suppose everything is ok now.
Thanks @minimAluminiumalism!
`feat(instrumentation): ...` or `fix(instrumentation): ...`

Related to #2811
In the latest ollama 0.4.7 release, combining dictionaries throws an error because `res` is of type `ChatResponse` instead of `dict`. However, the current test cases use Ollama version 0.3.2, which doesn't exhibit this error. I've added compatibility for older versions of ollama where `res` is a dict.

Important

Fix type error in Ollama instrumentation by using `model_dump()` for dict combination and update the `ollama` dependency.

- Fixes `_accumulate_streaming_response()` and `_aaccumulate_streaming_response()` in `__init__.py` by using `res.model_dump()` for dictionary combination.
- Updates the `ollama` dependency to version `^0.4.7` in `pyproject.toml`.
- Adds `ollama_streaming.py` to demonstrate streaming responses using the Ollama chat model.

This description was created by Ellipsis for 48de692. It will automatically update as commits are pushed.
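For context, here is a minimal sketch of the failure mode and the fix described above, assuming a locally available model named "llama3". This is illustrative, not the instrumentation's exact code:

```python
import ollama

stream = ollama.chat(
    model="llama3",  # assumed local model name
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

accumulated = {"message": {"content": ""}}
for res in stream:
    # On ollama 0.4.7 each chunk is a ChatResponse (a pydantic model), so
    # dict-style combination of `res` raises a type error. model_dump()
    # converts it to a dict; the hasattr fallback keeps ollama <= 0.3.x
    # working, where chunks already are plain dicts.
    chunk = res.model_dump() if hasattr(res, "model_dump") else res
    accumulated["message"]["content"] += chunk["message"]["content"]

print(accumulated["message"]["content"])
```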