
Processing memory with status, capturing to, get list and more. #930

Merged: 11 commits, Sep 28, 2024

Conversation

beastoin
Collaborator

@beastoin beastoin commented Sep 27, 2024

Issue: #868

Summary by Entelligence.AI

  • New Feature: Added functionality to track and manage the status of processing memories in both backend and frontend. This includes fetching, updating, and displaying the status of processing memories.
  • New Feature: Introduced new endpoints and functions for retrieving and updating processing memories based on their status.
  • New Feature: Enhanced the UI to display a waiting message when disconnected while capturing a memory, and added visual indicators for new memories.
  • Refactor: Improved state management with the addition of timers for tracking processing memories and better resource cleanup on dispose.
  • Refactor: Streamlined codebase by removing unused imports, classes, and methods, and refactoring existing code for better organization and efficiency.


github-actions bot commented Sep 27, 2024

Entelligence.AI

Walkthrough

This comprehensive update introduces new functionality for processing memories, including status tracking and time capturing. It enhances the backend with new endpoints and models, while the frontend receives UI updates to reflect memory processing status. The changes also improve state management and event handling, providing a more robust and user-friendly experience.

Changes

Files Summary
backend/database/processing_memories.py, backend/utils/processing_memories.py, backend/models/processing_memory.py, backend/routers/processing_memories.py, backend/routers/transcribe.py, backend/models/message_event.py Introduced new functions and models for retrieving and updating processing memories. Enhanced logic for creating and processing memory, including status changes and time calculations. Added message events for status changes.
app/lib/backend/http/api/processing_memories.dart, app/lib/backend/http/api/memories.dart, app/lib/backend/schema/memory.dart, app/lib/backend/schema/message_event.dart, app/lib/pages/memories/widgets/capture.dart, app/lib/pages/memories/widgets/processing_capture.dart, app/lib/providers/capture_provider.dart, app/lib/providers/memory_provider.dart, app/lib/pages/memories/page.dart, app/lib/pages/processing_memories/page.dart, app/lib/services/sockets/pure_socket.dart, app/lib/services/sockets/transcription_connection.dart, app/lib/main.dart, app/lib/pages/memories/widgets/memory_list_item.dart Added new functions and classes for fetching and displaying processing memories. Updated UI based on memory processing status. Improved state management in CaptureProvider and MemoryProvider. Refactored socket services, listeners, and event handling.

🎉 Code is changed, memories are processed,

Status tracked, time captured, progress assessed.

With each update, our software grows,

To new heights, the user experience flows. 🚀




@github-actions github-actions bot left a comment

Entelligence.AI

Commits reviewed: files that changed from the base of the PR, between commits c3c6b20 and 7ac3a81.
Files selected (5)
  • backend/database/processing_memories.py (2)
  • backend/models/processing_memory.py (2)
  • backend/routers/processing_memories.py (2)
  • backend/routers/transcribe.py (4)
  • backend/utils/processing_memories.py (3)
Review comments generated (16)
  • Review: 11
  • LGTM: 5

Tips

Chat with Entelligence.AI Bot (@Entelligence.AI)

  • Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
  • Invite the bot into a review comment chain by tagging @Entelligence.AI in a reply.

Code suggestions

  • The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
  • You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.

Pausing incremental reviews

  • Add @Entelligence.AI: ignore anywhere in the PR description to pause further reviews from the bot.

Comment on lines 58 to 65
def get_processing_memory(uid: str, id: str, ) -> BasicProcessingMemory:
    processing_memory = processing_memories_db.get_processing_memory_by_id(uid, id)
    if not processing_memory:
        print("processing memory is not found")
        return
    processing_memory = BasicProcessingMemory(**processing_memory)

    return processing_memory

Entelligence.AI

There is a lack of error handling here. If the processing memory is not found, it simply prints a message and returns None. This could lead to unexpected behavior in the calling function if it expects a BasicProcessingMemory object. It would be better to raise an exception when the processing memory is not found.

-        print("processing memory is not found")
-        return
+        raise ValueError("Processing memory not found for given uid and id.")
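To illustrate the raise-instead-of-return pattern the bot is suggesting, here is a minimal, self-contained sketch; the in-memory `store` dict is a hypothetical stand-in for `processing_memories_db.get_processing_memory_by_id`:

```python
def get_processing_memory(uid: str, id: str, store: dict):
    # Stub lookup standing in for the Firestore-backed db call.
    processing_memory = store.get((uid, id))
    if not processing_memory:
        # Raising forces the caller to handle the missing-record case
        # explicitly instead of receiving a silent None.
        raise ValueError("Processing memory not found for given uid and id.")
    return processing_memory

store = {("u1", "m1"): {"id": "m1", "status": "processing"}}
assert get_processing_memory("u1", "m1", store)["id"] == "m1"
try:
    get_processing_memory("u1", "missing", store)
except ValueError as e:
    assert "not found" in str(e)
```

In the actual endpoint, the caller would catch this and translate it into an HTTP 404.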

Comment on lines 41 to 63
    capturing_to: Optional[datetime] = None
    status: Optional[ProcessingMemoryStatus] = None
    geolocation: Optional[Geolocation] = None
    emotional_feedback: Optional[bool] = False

    memory_id: Optional[str] = None


class UpdateProcessingMemory(BaseModel):
    id: Optional[str] = None
    capturing_to: Optional[datetime] = None
    geolocation: Optional[Geolocation] = None
    emotional_feedback: Optional[bool] = False


class UpdateProcessingMemoryResponse(BaseModel):
    result: BasicProcessingMemory


class BasicProcessingMemoryResponse(BaseModel):
    result: BasicProcessingMemory


class BasicProcessingMemoriesResponse(BaseModel):
    result: [BasicProcessingMemory]

Entelligence.AI

The addition of the capturing_to and status fields in the UpdateProcessingMemory model is consistent with the changes made to the ProcessingMemory model. The new response models UpdateProcessingMemoryResponse, BasicProcessingMemoryResponse, and BasicProcessingMemoriesResponse have been added correctly. However, there seems to be a typo in the result field of the BasicProcessingMemoriesResponse model. It should be a list of BasicProcessingMemory objects, not a single object.

- result: [BasicProcessingMemory]
+ result: List[BasicProcessingMemory]
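The distinction matters because `[BasicProcessingMemory]` is a list literal containing the class object, not a type annotation, so Pydantic cannot interpret it; `typing.List[...]` carries the element type. A quick illustration (the class here is a bare stand-in for the Pydantic model):

```python
from typing import List, get_args

class BasicProcessingMemory:  # hypothetical stand-in for the Pydantic model
    pass

bad = [BasicProcessingMemory]       # a one-element list holding the class object
good = List[BasicProcessingMemory]  # a proper generic type annotation

assert isinstance(bad, list)                       # it is a runtime value, not a type
assert get_args(good) == (BasicProcessingMemory,)  # the annotation carries the element type
```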

Comment on lines 34 to 66
@router.get("/v1/processing-memories/{processing_memory_id}", response_model=BasicProcessingMemoryResponse,
            tags=['processing_memories'])
def get_processing_memory(
        processing_memory_id: str,
        uid: str = Depends(auth.get_current_user_uid)
):
    """
    Get ProcessingMemory endpoint.
    :param processing_memory_id:
    :param uid: user id.
    :return: The processing_memory.
    """

    update_processing_memory.id = processing_memory_id
    processing_memory = processing_memory_utils.get_processing_memory(uid, processing_memory_id)
    if not processing_memory:
        raise HTTPException(status_code=404, detail="Processing memory not found")

    return BasicProcessingMemoryResponse(result=processing_memory)


@router.get("/v1/processing-memories", response_model=BasicProcessingMemoriesResponse,
            tags=['processing_memories'])
def list_processing_memory(uid: str = Depends(auth.get_current_user_uid),):
    """
    List ProcessingMemory endpoint.
    :param uid: user id.
    :return: The list of processing_memories.
    """
    processing_memories = processing_memory_utils.get_processing_memories(uid)
    if not processing_memories or len(processing_memories) == 0:
        return BasicProcessingMemoriesResponse(result=[])

    return BasicProcessingMemoriesResponse(result=list(processing_memories))

Entelligence.AI

In the get_processing_memory function, there seems to be a typo on line 47 where you're trying to set the id on update_processing_memory instead of a new variable. This will likely cause an error as update_processing_memory is not defined in this context. You probably meant to set the id on a new instance of BasicProcessingMemory or similar.

- 47:     update_processing_memory.id = processing_memory_id
+ 47:     basic_processing_memory.id = processing_memory_id
    processing_memory = processing_memory_utils.get_processing_memory(uid, processing_memory_id)

Also, similar to the update_processing_memory function, the get_processing_memory function doesn't check if the user has the right to access the processing memory with the given ID. You should add a check to ensure that the user is the owner of the processing memory before returning it.

    processing_memory = processing_memory_utils.get_processing_memory(uid, processing_memory_id)
    if not processing_memory:
        raise HTTPException(status_code=404, detail="Processing memory not found")
+    if processing_memory.uid != uid:
+        raise HTTPException(status_code=403, detail="User does not have permission to access this processing memory")

In the list_processing_memory function, you're returning an empty list when there are no processing memories. This is fine, but it might be more informative to also return a message indicating that there are no processing memories (note this would require adding an optional message field to BasicProcessingMemoriesResponse).

-        return BasicProcessingMemoriesResponse(result=[])
+        return BasicProcessingMemoriesResponse(message="No processing memories found", result=[])
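The suggestion above presumes the response model has an optional message field, which it currently does not. A minimal sketch of how such a field could be declared, using dataclasses as a hypothetical stand-in for the PR's Pydantic models:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BasicProcessingMemory:  # hypothetical stand-in
    id: str

@dataclass
class BasicProcessingMemoriesResponse:
    result: List[BasicProcessingMemory] = field(default_factory=list)
    message: Optional[str] = None  # optional field the suggestion relies on

resp = BasicProcessingMemoriesResponse(result=[], message="No processing memories found")
assert resp.message == "No processing memories found"
```

With Pydantic the declaration would be analogous: `message: Optional[str] = None` on the model.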

Comment on lines 7 to 15
def get_processing_memories(uid: str, statuses: [str], limit: int = 5):
    processing_memories_ref = (
        db.collection('users').document(uid).collection('processing_memories')
    )
    if len(statuses) > 0:
        processing_memories_ref = processing_memories_ref.where(filter=FieldFilter('status', 'in', statuses))
    processing_memories_ref = processing_memories_ref.order_by('created_at', direction=firestore.Query.DESCENDING)
    processing_memories_ref = processing_memories_ref.limit(limit)
    return [doc.to_dict() for doc in processing_memories_ref.stream()]

Entelligence.AI

The get_processing_memories function is missing error handling for the Firestore operations. If there's an issue with the Firestore operation (e.g., network issues, permission issues), the function will throw an exception and potentially crash the application. It would be better to wrap these operations in a try/except block and handle any exceptions that may occur.

def get_processing_memories(uid: str, statuses: [str], limit: int = 5):
    processing_memories_ref = (
        db.collection('users').document(uid).collection('processing_memories')
    )
    if len(statuses) > 0:
        processing_memories_ref = processing_memories_ref.where(filter=FieldFilter('status', 'in', statuses))
    processing_memories_ref = processing_memories_ref.order_by('created_at', direction=firestore.Query.DESCENDING)
    processing_memories_ref = processing_memories_ref.limit(limit)
-   return [doc.to_dict() for doc in processing_memories_ref.stream()]
+   try:
+       return [doc.to_dict() for doc in processing_memories_ref.stream()]
+   except Exception as e:
+       # Log the error and re-raise or handle it appropriately.
+       raise


@github-actions github-actions bot left a comment

Entelligence.AI

Commits reviewed: files that changed from the base of the PR, between commits 7ac3a81 and 6e6e88e.
Files selected (6)
  • app/lib/backend/http/api/processing_memories.dart (1)
  • app/lib/backend/schema/memory.dart (4)
  • app/lib/pages/memories/widgets/capture.dart (1)
  • app/lib/pages/memories/widgets/processing_capture.dart (4)
  • app/lib/providers/capture_provider.dart (5)
  • backend/models/processing_memory.py (2)
Files skipped from review due to trivial changes (1)
  • app/lib/pages/memories/widgets/capture.dart
Review comments generated (19)
  • Review: 15
  • LGTM: 4


Comment on lines +111 to +116
// Processing memory
var provider = context.read<CaptureProvider>();
bool havingCapturingMemory = provider.capturingProcessingMemory != null;

Entelligence.AI

This code introduces a new boolean variable havingCapturingMemory to check if a processing memory is being captured. It's a good practice to separate such checks into their own variables for readability and maintainability.

Comment on lines +3 to +14
from enum import Enum

from pydantic import BaseModel

from models.memory import Geolocation
from models.transcript_segment import TranscriptSegment


class ProcessingMemoryStatus(str, Enum):
    Capturing = 'capturing'
    Processing = 'processing'
    Done = 'done'

Entelligence.AI

The introduction of the ProcessingMemoryStatus enum is a good practice as it provides a clear and consistent way to represent the different statuses of processing memories. However, it would be beneficial to add some comments explaining what each status represents for better maintainability.

+ # Enum to represent the different statuses of processing memories
  class ProcessingMemoryStatus(str, Enum):
      Capturing = 'capturing'  # Memory is currently being captured
      Processing = 'processing'  # Memory capture is complete and is now being processed
      Done = 'done'  # Processing of memory is complete

Comment on lines 17 to +24
class ProcessingMemory(BaseModel):
    id: str
    session_id: Optional[str] = None
    session_ids: List[str] = []
    audio_url: Optional[str] = None
    created_at: datetime
    capturing_to: Optional[datetime] = None
    status: Optional[ProcessingMemoryStatus] = None

Entelligence.AI

The addition of capturing_to and status fields in the ProcessingMemory model seems appropriate given the context. However, please ensure that these new fields are properly handled in all places where instances of this model are created or updated.

Comment on lines +41 to +42
capturing_to: Optional[datetime] = None
status: Optional[ProcessingMemoryStatus] = None

Entelligence.AI

Similar to the previous comment, the addition of capturing_to and status fields in the UpdateProcessingMemory model seems appropriate. Again, ensure these fields are properly handled wherever this model is used.

Comment on lines 50 to +61
id: Optional[str] = None
capturing_to: Optional[datetime] = None

Entelligence.AI

The addition of capturing_to field in the UpdateProcessingMemory model is fine. Ensure that this field is properly handled in all places where instances of this model are created or updated.

Comment on lines 77 to 102
class ServerProcessingMemory {
  final String id;
  final DateTime createdAt;
  final DateTime? startedAt;
  final DateTime? capturingTo;
  final ServerProcessingMemoryStatus? status;

  ServerProcessingMemory({
    required this.id,
    required this.createdAt,
    this.startedAt,
    this.capturingTo,
    this.status,
  });

  factory ServerProcessingMemory.fromJson(Map<String, dynamic> json) {
    return ServerProcessingMemory(
      id: json['id'],
      createdAt: DateTime.parse(json['created_at']).toLocal(),
      startedAt: json['started_at'] != null ? DateTime.parse(json['started_at']).toLocal() : null,
      capturingTo: json['capturing_to'] != null ? DateTime.parse(json['capturing_to']).toLocal() : null,
      status: json['status'] != null ? ServerProcessingMemoryStatus.valuesFromString(json['status']) : null,

Entelligence.AI

The ServerProcessingMemory class has been updated with new fields capturingTo and status. The changes look good as they align with the PR description. However, please ensure that the backend API is also updated to return these new fields in the response.

Comment on lines +107 to +116
'capturing_to': capturingTo?.toUtc().toIso8601String(),
'status': status.toString(),

Entelligence.AI

The toJson method of ServerProcessingMemory has been updated to include capturingTo and status. This change is consistent with the updates made to the ServerProcessingMemory class.

Comment on lines 156 to 224

    // Fetch and watch capturing status
    ProcessingMemoryResponse? result = await fetchProcessingMemoryServer(
      id: processingMemoryId,
    );
    if (result?.result == null) {
      debugPrint("Can not fetch processing memory, result null");
    }
    _setCapturingProcessingMemory(result?.result);

    // Update processing memory
    _updateProcessingMemory();
  }

  void _setCapturingProcessingMemory(ServerProcessingMemory? pm) {
    debugPrint("${pm?.toJson()}");
    if (pm != null &&
        pm.status == ServerProcessingMemoryStatus.capturing &&
        pm.capturingTo != null &&
        pm.capturingTo!.isAfter(DateTime.now())) {
      capturingProcessingMemory = pm;

      // TODO: thinh, might set pre-segments
    } else {
      capturingProcessingMemory = null;
    }
    notifyListeners();

    // End
    if (capturingProcessingMemory == null) {
      return;
    }

    // Or watch
    var id = capturingProcessingMemory!.id;
    var delayMs = capturingProcessingMemory?.capturingTo != null
        ? capturingProcessingMemory!.capturingTo!.millisecondsSinceEpoch - DateTime.now().millisecondsSinceEpoch
        : 2 * 60 * 1000; // 2m
    if (delayMs > 0) {
      _processingMemoryWatchTimer?.cancel();
      _processingMemoryWatchTimer = Timer(Duration(milliseconds: delayMs), () async {
        ProcessingMemoryResponse? result = await fetchProcessingMemoryServer(id: id);
        if (result?.result == null) {
          debugPrint("Can not fetch processing memory, result null");
          return;
        }
        _setCapturingProcessingMemory(result?.result);
      });
    }
  }

Entelligence.AI

The _onProcessingMemoryCreated method has been significantly expanded to include fetching and watching the capturing status of processing memory. However, there's a potential issue with the timer setup for watching status updates. The timer is set based on the capturingTo field of the processing memory, but it doesn't account for possible delays in network communication or server processing. If the capturingTo time is very close to the current time, the timer might fire after the capturing has already ended. To mitigate this, consider adding a buffer time to the delay.

- var delayMs = capturingProcessingMemory?.capturingTo != null
-         ? capturingProcessingMemory!.capturingTo!.millisecondsSinceEpoch - DateTime.now().millisecondsSinceEpoch
-         : 2 * 60 * 1000; // 2m
+ var delayMs = capturingProcessingMemory?.capturingTo != null
+         ? max(0, capturingProcessingMemory!.capturingTo!.millisecondsSinceEpoch - DateTime.now().millisecondsSinceEpoch - BUFFER_TIME_MS)
+         : 2 * 60 * 1000; // 2m

Where BUFFER_TIME_MS is a constant representing the buffer time in milliseconds.
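The clamped, buffered delay can be sketched language-independently; here is a Python version of the same arithmetic (the BUFFER_TIME_MS value and the epoch inputs are illustrative):

```python
BUFFER_TIME_MS = 5_000  # illustrative safety margin

def watch_delay_ms(capturing_to_ms, now_ms, default_ms=2 * 60 * 1000):
    """Delay before re-fetching status: shortened by a buffer and clamped to zero."""
    if capturing_to_ms is None:
        return default_ms  # no deadline known: fall back to 2 minutes
    return max(0, capturing_to_ms - now_ms - BUFFER_TIME_MS)

assert watch_delay_ms(None, 0) == 120_000        # no deadline: 2-minute fallback
assert watch_delay_ms(100_000, 50_000) == 45_000  # 50s remaining minus 5s buffer
assert watch_delay_ms(100_000, 99_000) == 0       # deadline too close: fire immediately
```

Clamping with max(0, ...) also avoids passing a negative duration to the timer when the deadline has already passed.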

Comment on lines 49 to 71

Future<ProcessingMemoryResponse?> fetchProcessingMemoryServer({
  required String id,
}) async {
  var response = await makeApiCall(
    url: '${Env.apiBaseUrl}v1/processing-memories/$id',
    headers: {},
    method: 'GET',
    body: "",
  );
  if (response == null) return null;
  debugPrint('updateProcessingMemoryServer: ${response.body}');
  if (response.statusCode == 200) {
    return ProcessingMemoryResponse.fromJson(jsonDecode(response.body));
  } else {
    // TODO: Server returns 304 doesn't recover
    CrashReporting.reportHandledCrash(
      Exception('Failed to create memory'),
      StackTrace.current,
      level: NonFatalExceptionLevel.info,
      userAttributes: {
        'response': response.body,
      },
    );
  }
  return null;
}

Entelligence.AI

The fetchProcessingMemoryServer function is making an API call without any authentication headers. This could lead to unauthorized access to sensitive data. Please ensure that the necessary authorization headers are included in the API call.

Also, the error handling here seems to be incomplete. The code only handles the case where the status code is 200 and doesn't handle other potential HTTP status codes like 400 (Bad Request), 401 (Unauthorized), 403 (Forbidden), 404 (Not Found), etc. It's important to handle these cases to provide more specific error messages and potentially recover from errors when possible.

Future<ProcessingMemoryResponse?> fetchProcessingMemoryServer({
  required String id,
}) async {
  var response = await makeApiCall(
    url: '${Env.apiBaseUrl}v1/processing-memories/$id',
-   headers: {},
+   headers: {
+     'Authorization': 'Bearer ${your_token_here}',
+   },
    method: 'GET',
    body: "",
  );
  if (response == null) return null;
  debugPrint('updateProcessingMemoryServer: ${response.body}');
  if (response.statusCode == 200) {
    return ProcessingMemoryResponse.fromJson(jsonDecode(response.body));
  } else if (response.statusCode == 400) {
    // Handle bad request
  } else if (response.statusCode == 401) {
    // Handle unauthorized
  } else if (response.statusCode == 403) {
    // Handle forbidden
  } else if (response.statusCode == 404) {
    // Handle not found
  } else {
    // TODO: Server returns 304 doesn't recover
    CrashReporting.reportHandledCrash(
      Exception('Failed to create memory'),
      StackTrace.current,
      level: NonFatalExceptionLevel.info,
      userAttributes: {
        'response': response.body,
      },
    );
  }
  return null;
}

Please replace 'Bearer ${your_token_here}' with the actual token.


@github-actions github-actions bot left a comment

Entelligence.AI

Commits reviewed: files that changed from the base of the PR, between commits 6e6e88e and 4304ee4.
Files selected (5)
  • app/lib/backend/schema/memory.dart (4)
  • app/lib/providers/capture_provider.dart (8)
  • backend/models/processing_memory.py (2)
  • backend/routers/processing_memories.py (2)
  • backend/utils/processing_memories.py (3)
Review comments generated (21)
  • Review: 15
  • LGTM: 6


Comment on lines 58 to 72
def get_processing_memory(uid: str, id: str, ) -> DetailProcessingMemory:
    processing_memory = processing_memories_db.get_processing_memory_by_id(uid, id)
    if not processing_memory:
        print("processing memory is not found")
        return
    processing_memory = DetailProcessingMemory(**processing_memory)

    return processing_memory


def get_processing_memories(uid: str) -> [BasicProcessingMemory]:
    processing_memories = processing_memories_db.get_processing_memories(uid, statuses=[ProcessingMemoryStatus.Processing], limit=5)
    if not processing_memories or len(processing_memories) == 0:
        return []

    return [BasicProcessingMemory(**processing_memory) for processing_memory in processing_memories]

Entelligence.AI

The function get_processing_memory does not handle the case when a processing memory is not found. It simply prints a message and returns None. This could lead to unexpected behavior in the calling code if it expects a DetailProcessingMemory object. Consider raising an exception instead.

-        print("processing memory is not found")
-        return
+        raise ValueError("Processing memory not found")

Comment on lines 34 to 66
@router.get("/v1/processing-memories/{processing_memory_id}", response_model=DetailProcessingMemoryResponse,
            tags=['processing_memories'])
def get_processing_memory(
        processing_memory_id: str,
        uid: str = Depends(auth.get_current_user_uid)
):
    """
    Get ProcessingMemory endpoint.
    :param processing_memory_id:
    :param uid: user id.
    :return: The processing_memory.
    """

    update_processing_memory.id = processing_memory_id
    processing_memory = processing_memory_utils.get_processing_memory(uid, processing_memory_id)
    if not processing_memory:
        raise HTTPException(status_code=404, detail="Processing memory not found")

    return DetailProcessingMemoryResponse(result=processing_memory)


@router.get("/v1/processing-memories", response_model=BasicProcessingMemoriesResponse,
            tags=['processing_memories'])
def list_processing_memory(uid: str = Depends(auth.get_current_user_uid),):
    """
    List ProcessingMemory endpoint.
    :param uid: user id.
    :return: The list of processing_memories.
    """
    processing_memories = processing_memory_utils.get_processing_memories(uid)
    if not processing_memories or len(processing_memories) == 0:
        return BasicProcessingMemoriesResponse(result=[])

    return BasicProcessingMemoriesResponse(result=list(processing_memories))

Entelligence.AI

In the function get_processing_memory, there seems to be a copy-paste error. The line update_processing_memory.id = processing_memory_id should not be present in this function as it is not relevant here. Please remove this line.

- 47:     update_processing_memory.id = processing_memory_id

Also, in the function list_processing_memory, you are checking if processing_memories is None or empty. If get_processing_memories always returns a list (even an empty one), you can simplify the check by removing the not processing_memories part.

- 63:     if not processing_memories or len(processing_memories) == 0:
+ 63:     if len(processing_memories) == 0:

Comment on lines 77 to 105
class ServerProcessingMemory {
  final String id;
  final DateTime createdAt;
  final DateTime? startedAt;
  final DateTime? capturingTo;
  final ServerProcessingMemoryStatus? status;
  final List<TranscriptSegment> transcriptSegments;

  ServerProcessingMemory({
    required this.id,
    required this.createdAt,
    this.startedAt,
    this.capturingTo,
    this.status,
    this.transcriptSegments = const [],
  });

  factory ServerProcessingMemory.fromJson(Map<String, dynamic> json) {
    return ServerProcessingMemory(
      id: json['id'],
      createdAt: DateTime.parse(json['created_at']).toLocal(),
      startedAt: json['started_at'] != null ? DateTime.parse(json['started_at']).toLocal() : null,
      capturingTo: json['capturing_to'] != null ? DateTime.parse(json['capturing_to']).toLocal() : null,
      status: json['status'] != null ? ServerProcessingMemoryStatus.valuesFromString(json['status']) : null,
      transcriptSegments: ((json['transcript_segments'] ?? []) as List<dynamic>)
          .map((segment) => TranscriptSegment.fromJson(segment))
          .toList(),

Entelligence.AI

The ServerProcessingMemory class has been updated with new fields capturingTo and status. The fromJson factory method has also been updated to parse these new fields from JSON. This is a good change as it allows the application to handle more detailed information about processing memories.

However, there's a potential issue with null safety. If the status field in the JSON is null or contains an unrecognized value, ServerProcessingMemoryStatus.valuesFromString(json['status']) will return null or unknown, which might not be the intended behavior. Consider throwing an error when the status is null or unrecognized to avoid silent failures.

- status: json['status'] != null ? ServerProcessingMemoryStatus.valuesFromString(json['status']) : null,
+ status: json['status'] != null ? ServerProcessingMemoryStatus.valuesFromString(json['status']) : throw SomeException("Invalid or missing status"),

Comment on lines +112 to +117
      'capturing_to': capturingTo?.toUtc().toIso8601String(),
      'status': status.toString(),
      'transcript_segments': transcriptSegments.map((segment) => segment.toJson()).toList(),

Entelligence.AI

The toJson method of ServerProcessingMemory has been updated to include the new fields capturingTo and status. This is a necessary change for serialization of the updated class.

Comment on lines 41 to 82
    capturing_to: Optional[datetime] = None
    status: Optional[ProcessingMemoryStatus] = None
    geolocation: Optional[Geolocation] = None
    emotional_feedback: Optional[bool] = False

    memory_id: Optional[str] = None


class DetailProcessingMemory(BaseModel):
    id: str
    timer_start: float
    created_at: datetime
    capturing_to: Optional[datetime] = None
    status: Optional[ProcessingMemoryStatus] = None
    geolocation: Optional[Geolocation] = None
    emotional_feedback: Optional[bool] = False
    transcript_segments: List[TranscriptSegment] = []
    memory_id: Optional[str] = None


class UpdateProcessingMemory(BaseModel):
    id: Optional[str] = None
    capturing_to: Optional[datetime] = None
    geolocation: Optional[Geolocation] = None
    emotional_feedback: Optional[bool] = False


class UpdateProcessingMemoryResponse(BaseModel):
    result: BasicProcessingMemory


class DetailProcessingMemoryResponse(BaseModel):
    result: DetailProcessingMemory


class BasicProcessingMemoryResponse(BaseModel):
    result: BasicProcessingMemory


class BasicProcessingMemoriesResponse(BaseModel):
    result: List[BasicProcessingMemory]

Entelligence.AI

The addition of capturing_to and status fields in other models (DetailProcessingMemory, UpdateProcessingMemory) and response models is consistent with the changes made to the ProcessingMemory model. This will allow the frontend to receive and send data related to the capturing time and status of processing memories. However, make sure to update all relevant API endpoints and database queries to handle these new fields correctly.

 class DetailProcessingMemory(BaseModel):
     id: str
     timer_start: float
     created_at: datetime
+     capturing_to: Optional[datetime] = None
+     status: Optional[ProcessingMemoryStatus] = None
     geolocation: Optional[Geolocation] = None
     emotional_feedback: Optional[bool] = False
     transcript_segments: List[TranscriptSegment] = []
     memory_id: Optional[str] = None

 class UpdateProcessingMemory(BaseModel):
     id: Optional[str] = None
+     capturing_to: Optional[datetime] = None
     geolocation: Optional[Geolocation] = None
     emotional_feedback: Optional[bool] = False

 class UpdateProcessingMemoryResponse(BaseModel):
+     result: BasicProcessingMemory

 class DetailProcessingMemoryResponse(BaseModel):
+     result: DetailProcessingMemory

 class BasicProcessingMemoryResponse(BaseModel):
+     result: BasicProcessingMemory

 class BasicProcessingMemoriesResponse(BaseModel):
+     result: List[BasicProcessingMemory]
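To see why the new optional fields stay backward-compatible, here is a dependency-free sketch (plain dataclasses standing in for the pydantic models; field names follow UpdateProcessingMemory above):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class UpdateProcessingMemory:
    # Every new field is optional with a default, so payloads from
    # older clients that omit capturing_to still validate.
    id: Optional[str] = None
    capturing_to: Optional[datetime] = None
    emotional_feedback: Optional[bool] = False

def from_payload(payload: dict) -> UpdateProcessingMemory:
    raw_to = payload.get("capturing_to")
    return UpdateProcessingMemory(
        id=payload.get("id"),
        capturing_to=datetime.fromisoformat(raw_to) if raw_to else None,
        emotional_feedback=payload.get("emotional_feedback", False),
    )
```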

Comment on lines 152 to 224

// Fetch and watch capturing status
ProcessingMemoryResponse? result = await fetchProcessingMemoryServer(
id: processingMemoryId,
);
if (result?.result == null) {
debugPrint("Can not fetch processing memory, result null");
}
_setCapturingProcessingMemory(result?.result);

// Set pre-segements
if (capturingProcessingMemory != null && (capturingProcessingMemory?.transcriptSegments ?? []).isNotEmpty) {
List<TranscriptSegment> transcriptSegments = capturingProcessingMemory!.transcriptSegments;
if (segments.isNotEmpty) {
int idx = transcriptSegments.indexWhere((e) => e.end > segments.first.start) - 1;
if (idx >= 0) {
transcriptSegments = transcriptSegments.sublist(0, idx);
}
transcriptSegments.addAll(segments);
}
segments = transcriptSegments;
setHasTranscripts(segments.isNotEmpty);
}

// Update processing memory
_updateProcessingMemory();
}

void _setCapturingProcessingMemory(ServerProcessingMemory? pm) {
var now = DateTime.now();
debugPrint("${pm?.toJson()}");
if (pm != null &&
pm.status == ServerProcessingMemoryStatus.capturing &&
pm.capturingTo != null &&
pm.capturingTo!.isAfter(now)) {
capturingProcessingMemory = pm;
} else {
capturingProcessingMemory = null;
}
notifyListeners();

// End
if (capturingProcessingMemory == null) {
return;
}

// Or watch
var id = capturingProcessingMemory!.id;
var delayMs = capturingProcessingMemory?.capturingTo != null
? capturingProcessingMemory!.capturingTo!.millisecondsSinceEpoch - now.millisecondsSinceEpoch
: 2 * 60 * 1000; // 2m
if (delayMs > 0) {
_processingMemoryWatchTimer?.cancel();
_processingMemoryWatchTimer = Timer(Duration(milliseconds: delayMs), () async {
ProcessingMemoryResponse? result = await fetchProcessingMemoryServer(id: id);
if (result?.result == null) {
debugPrint("Can not fetch processing memory, result null");
return;
}
_setCapturingProcessingMemory(result?.result);
});
}
}
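The pre-segment merge in the diff above (trim the server transcript at the first segment that overlaps the earliest local one, then append the local segments) can be sketched in Python, with (start, end) tuples standing in for TranscriptSegment:

```python
def merge_segments(server_segments, local_segments):
    # Mirrors the Dart logic: find the first server segment whose end
    # passes the start of the first local segment, take
    # sublist(0, idx) with idx = indexWhere - 1, then append the
    # local segments.
    merged = list(server_segments)
    if not local_segments:
        return merged
    first_start = local_segments[0][0]
    idx = next(
        (i for i, (_, end) in enumerate(merged) if end > first_start),
        -1,  # Dart's indexWhere returns -1 when nothing matches
    ) - 1
    if idx >= 0:
        merged = merged[:idx]
    return merged + list(local_segments)
```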

Entelligence.AI

There are a few potential issues in this section:

  1. There is no error handling for the fetchProcessingMemoryServer function call. If it fails due to network issues or server errors, the application might crash or behave unexpectedly.

  2. The timer _processingMemoryWatchTimer is only cancelled inside the delayMs > 0 branch, so when delayMs <= 0 a previously scheduled timer keeps running against stale state.

  3. The delay calculation does not account for negative values. If capturingProcessingMemory!.capturingTo! is earlier than now, delayMs is negative, the delayMs > 0 guard skips scheduling entirely, and the stale memory is never re-checked; clamping with max(0, ...) triggers an immediate re-fetch instead.

Here are some suggestions to address these issues:

    ProcessingMemoryResponse? result;
    try {
      result = await fetchProcessingMemoryServer(
        id: processingMemoryId,
      );
    } catch (e) {
      debugPrint("Error fetching processing memory: $e");
      return;
    }
    if (result?.result == null) {
      debugPrint("Can not fetch processing memory, result null");
      return;
    }
    _setCapturingProcessingMemory(result?.result);

    _processingMemoryWatchTimer?.cancel();

    var delayMs = max(0, capturingProcessingMemory?.capturingTo != null
        ? capturingProcessingMemory!.capturingTo!.millisecondsSinceEpoch - now.millisecondsSinceEpoch
        : 2 * 60 * 1000); // 2m
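The clamped-delay computation reduces to a small pure function; a sketch in Python (milliseconds, names illustrative):

```python
def watch_delay_ms(capturing_to_ms, now_ms, default_ms=2 * 60 * 1000):
    # Fall back to a 2-minute watch window when no capturing_to
    # deadline is known; clamp negative remainders to zero so an
    # already-expired memory is re-fetched immediately instead of never.
    if capturing_to_ms is None:
        return default_ms
    return max(0, capturing_to_ms - now_ms)
```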

Comment on lines 435 to 479
}
_socket?.subscribe(this, this);
_transcriptServiceReady = true;

if (segments.isNotEmpty) {
// means that it was a reconnection, so we need to reset
streamStartedAtSecond = null;
secondsMissedOnReconnect = (DateTime.now().difference(firstStreamReceivedAt!).inSeconds);
}
}

Future streamAudioToWs(String id, BleAudioCodec codec) async {
debugPrint('streamAudioToWs in capture_provider');
audioStorage = WavBytesUtil(codec: codec);
if (_bleBytesStream != null) {
_bleBytesStream?.cancel();
}
_bleBytesStream = await _getBleAudioBytesListener(
id,
onAudioBytesReceived: (List<int> value) {
if (value.isEmpty) return;
// audioStorage!.storeFramePacket(value);
// print('audioStorage: ${audioStorage!.frames.length} ${audioStorage!.rawPackets.length}');

final trimmedValue = value.sublist(3);

Entelligence.AI

The removal of audioStorage related code might cause issues if the audio data was being used elsewhere in the application. Ensure that this change doesn't break any functionality that relied on audioStorage.

Comment on lines +767 to +800
// Wait reconnect
if (capturingProcessingMemory == null) {
_clean();
setMemoryCreating(false);
setHasTranscripts(false);
notifyListeners();
}

Entelligence.AI

The condition for cleaning up resources has changed. Ensure this new condition (capturingProcessingMemory == null) is the intended behavior and won't cause any unintended side effects.

@github-actions github-actions bot left a comment

Entelligence.AI

Commits reviewed: files that changed from the base of the PR, between commits 4304ee4 and 4a14d31.
Files selected (12)
  • app/lib/backend/http/api/memories.dart (1)
  • app/lib/backend/schema/memory.dart (4)
  • app/lib/pages/memories/page.dart (1)
  • app/lib/pages/memories/widgets/processing_capture.dart (6)
  • app/lib/pages/processing_memories/page.dart (1)
  • app/lib/providers/capture_provider.dart (8)
  • app/lib/providers/memory_provider.dart (4)
  • backend/database/processing_memories.py (2)
  • backend/models/processing_memory.py (2)
  • backend/routers/processing_memories.py (2)
  • backend/routers/transcribe.py (5)
  • backend/utils/processing_memories.py (3)
Review comments generated (52)
  • Review: 37
  • LGTM: 15

Tips

Chat with Entelligence.AI Bot (@Entelligence.AI)

  • Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
  • Invite the bot into a review comment chain by tagging @Entelligence.AI in a reply.

Code suggestions

  • The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
  • You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.

Pausing incremental reviews

  • Add @Entelligence.AI: ignore anywhere in the PR description to pause further reviews from the bot.

Comment on lines 309 to 431
),
);
}

// PROCESSING MEMORY

class ProcessingMemoryWidget extends StatefulWidget {
final ServerProcessingMemory memory;

const ProcessingMemoryWidget({
super.key,
required this.memory,
});
@override
State<ProcessingMemoryWidget> createState() => _ProcessingMemoryWidgetState();
}

class _ProcessingMemoryWidgetState extends State<ProcessingMemoryWidget> {
@override
Widget build(BuildContext context) {
return Consumer3<CaptureProvider, DeviceProvider, ConnectivityProvider>(
builder: (context, provider, deviceProvider, connectivityProvider, child) {
return GestureDetector(
onTap: () async {
if (widget.memory.transcriptSegments.isEmpty) return;
routeToPage(
context,
ProcessingMemoryPage(
memory: widget.memory,
));
},
child: Container(
margin: const EdgeInsets.symmetric(horizontal: 16),
width: double.maxFinite,
decoration: BoxDecoration(
color: Colors.grey.shade900,
borderRadius: BorderRadius.circular(16.0),
),
child: Padding(
padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 12),
child: Column(
mainAxisSize: MainAxisSize.min,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
_getMemoryHeader(context),
widget.memory.transcriptSegments.isNotEmpty
? Column(
children: [
const SizedBox(height: 8),
getLiteTranscriptWidget(
widget.memory.transcriptSegments,
[],
null,
),
const SizedBox(height: 8),
],
)
: const SizedBox.shrink(),
],
),
),
));
});
}

_getMemoryHeader(BuildContext context) {
return Padding(
padding: const EdgeInsets.only(left: 0, right: 12),
child: Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
Row(
children: [
const SizedBox(
width: 22,
height: 22,
child: CircularProgressIndicator(
valueColor: AlwaysStoppedAnimation<Color>(Colors.white),
),
),
const SizedBox(width: 12),
Container(
decoration: BoxDecoration(
color: Colors.grey.shade800,
borderRadius: BorderRadius.circular(16),
),
padding: const EdgeInsets.symmetric(horizontal: 12, vertical: 6),
child: Text(
'Processing...',
style: Theme.of(context).textTheme.bodyMedium!.copyWith(color: Colors.white),
maxLines: 1,
),
),
],
)
],
),
);
}
}

Entelligence.AI

This new widget ProcessingMemoryWidget is introduced to handle the display of processing memories. The implementation looks fine at first glance, but it's crucial to ensure that it handles all possible states of a processing memory correctly. Also, consider adding error handling within this widget to gracefully handle any unexpected situations.

Comment on lines 77 to 106
class ServerProcessingMemory {
final String id;
final DateTime createdAt;
final DateTime? startedAt;
final DateTime? capturingTo;
final ServerProcessingMemoryStatus? status;
final List<TranscriptSegment> transcriptSegments;
final String? memoryId;

ServerProcessingMemory({
required this.id,
required this.createdAt,
this.startedAt,
this.capturingTo,
this.status,
this.transcriptSegments = const [],
this.memoryId,
});

factory ServerProcessingMemory.fromJson(Map<String, dynamic> json) {
return ServerProcessingMemory(
id: json['id'],
createdAt: DateTime.parse(json['created_at']).toLocal(),
startedAt: json['started_at'] != null ? DateTime.parse(json['started_at']).toLocal() : null,
capturingTo: json['capturing_to'] != null ? DateTime.parse(json['capturing_to']).toLocal() : null,
status: json['status'] != null ? ServerProcessingMemoryStatus.valuesFromString(json['status']) : null,
transcriptSegments: ((json['transcript_segments'] ?? []) as List<dynamic>)
.map((segment) => TranscriptSegment.fromJson(segment))
.toList(),
memoryId: json['memoryId'],

Entelligence.AI

The ServerProcessingMemory class has been updated with new fields. It's important to ensure that these new fields are properly initialized and handled throughout the application to avoid null reference errors or unexpected behavior. Also, when parsing dates from JSON, consider error handling for cases where the date string might not be in the expected format.

Comment on lines 152 to 224

// Fetch and watch capturing status
ProcessingMemoryResponse? result = await fetchProcessingMemoryServer(
id: processingMemoryId,
);
if (result?.result == null) {
debugPrint("Can not fetch processing memory, result null");
}
_setCapturingProcessingMemory(result?.result);

// Set pre-segements
if (capturingProcessingMemory != null && (capturingProcessingMemory?.transcriptSegments ?? []).isNotEmpty) {
segments = capturingProcessingMemory!.transcriptSegments;
setHasTranscripts(segments.isNotEmpty);
}

// Update processing memory
_updateProcessingMemory();
}

void _setCapturingProcessingMemory(ServerProcessingMemory? pm) {
var now = DateTime.now();
debugPrint("${pm?.toJson()}");
if (pm != null &&
pm.status == ServerProcessingMemoryStatus.capturing &&
pm.capturingTo != null &&
pm.capturingTo!.isAfter(now)) {
capturingProcessingMemory = pm;
} else {
capturingProcessingMemory = null;
}
notifyListeners();

// End
if (capturingProcessingMemory == null) {
return;
}

// Or watch
var id = capturingProcessingMemory!.id;
var delayMs = capturingProcessingMemory?.capturingTo != null
? capturingProcessingMemory!.capturingTo!.millisecondsSinceEpoch - now.millisecondsSinceEpoch
: 2 * 60 * 1000; // 2m
if (delayMs > 0) {
_processingMemoryWatchTimer?.cancel();
_processingMemoryWatchTimer = Timer(Duration(milliseconds: delayMs), () async {
ProcessingMemoryResponse? result = await fetchProcessingMemoryServer(id: id);
if (result?.result == null) {
debugPrint("Can not fetch processing memory, result null");
return;
}
_setCapturingProcessingMemory(result?.result);
});
}
}

Entelligence.AI

There seems to be a lack of error handling in this section. If the fetchProcessingMemoryServer function fails for any reason (network issues, server errors, etc.), it will throw an exception that is not currently being caught. This could lead to unexpected behavior or crashes. Consider adding a try-catch block around the network request and handle the potential exceptions appropriately.

+    ProcessingMemoryResponse? result;
+    try {
+      result = await fetchProcessingMemoryServer(
+        id: processingMemoryId,
+      );
+    } catch (e) {
+      debugPrint("Error fetching processing memory: $e");
+      // Handle the error appropriately here
+      return;
+    }

Declaring result before the try keeps it in scope for the _setCapturingProcessingMemory call that follows.

Comment on lines 89 to 105
Future<List<ServerProcessingMemory>> getProcessingMemories(
{List<String> filterIds = const [], int limit = 5, int offset = 0}) async {
var response = await makeApiCall(
url: '${Env.apiBaseUrl}v1/processing-memories?filter_ids=${filterIds.join(",")}limit=$limit&offset=$offset',
headers: {},
method: 'GET',
body: '');
if (response == null) return [];
if (response.statusCode == 200) {
// decode body bytes to utf8 string and then parse json so as to avoid utf8 char issues
var body = utf8.decode(response.bodyBytes);
var memories =
(jsonDecode(body)["result"] as List<dynamic>).map((memory) => ServerProcessingMemory.fromJson(memory)).toList();
debugPrint('getProcessingMemories length: ${memories.length}');
return memories;
}
return [];
}

Entelligence.AI

The getProcessingMemories function is missing error handling for non-200 HTTP status codes. If the server returns an error, this function will simply return an empty list, which could be misleading to the caller. It would be better to throw an exception with a meaningful message in case of an error response from the server.

  if (response.statusCode == 200) {
    // decode body bytes to utf8 string and then parse json so as to avoid utf8 char issues
    var body = utf8.decode(response.bodyBytes);
    var memories =
        (jsonDecode(body)["result"] as List<dynamic>).map((memory) => ServerProcessingMemory.fromJson(memory)).toList();
    debugPrint('getProcessingMemories length: ${memories.length}');
    return memories;
+ } else {
+   throw Exception('Failed to load processing memories: ${response.statusCode} ${response.reasonPhrase}');
  }
  return [];

Future<List<ServerProcessingMemory>> getProcessingMemories(
{List<String> filterIds = const [], int limit = 5, int offset = 0}) async {
var response = await makeApiCall(
url: '${Env.apiBaseUrl}v1/processing-memories?filter_ids=${filterIds.join(",")}limit=$limit&offset=$offset',

Entelligence.AI

There seems to be a missing '&' before 'limit' in the URL parameters. This could lead to incorrect API calls.

- url: '${Env.apiBaseUrl}v1/processing-memories?filter_ids=${filterIds.join(",")}limit=$limit&offset=$offset',
+ url: '${Env.apiBaseUrl}v1/processing-memories?filter_ids=${filterIds.join(",")}&limit=$limit&offset=$offset',
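Rather than patching the separator by hand, building the query string with a helper makes this class of bug impossible. A Python sketch (urllib.parse; endpoint shape taken from the diff above):

```python
from urllib.parse import urlencode

def processing_memories_url(base, filter_ids, limit=5, offset=0):
    # urlencode inserts the '&' separators and percent-escapes values,
    # so "filter_ids=...limit=..." concatenation bugs cannot occur.
    query = urlencode({
        "filter_ids": ",".join(filter_ids),
        "limit": limit,
        "offset": offset,
    })
    return f"{base}v1/processing-memories?{query}"
```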

Comment on lines 66 to +82
class UpdateProcessingMemoryResponse(BaseModel):
result: BasicProcessingMemory

class DetailProcessingMemoryResponse(BaseModel):
result: DetailProcessingMemory


class DetailProcessingMemoriesResponse(BaseModel):
result: List[DetailProcessingMemory]


class BasicProcessingMemoryResponse(BaseModel):
result: BasicProcessingMemory


class BasicProcessingMemoriesResponse(BaseModel):
result: List[BasicProcessingMemory]

Entelligence.AI

The new response models UpdateProcessingMemoryResponse, DetailProcessingMemoryResponse, DetailProcessingMemoriesResponse, BasicProcessingMemoryResponse, and BasicProcessingMemoriesResponse are well structured and should provide a consistent interface for the frontend to interact with. Good job on this!

Comment on lines +34 to +40
WidgetsBinding.instance.addPostFrameCallback((_) async {
await Navigator.of(context).pushReplacement(MaterialPageRoute(
builder: (c) => MemoryDetailPage(
memory: memory,
),
));
});

Entelligence.AI

The use of WidgetsBinding.instance.addPostFrameCallback to navigate to a new page might not be the best approach. This could lead to unexpected behavior if the widget is removed from the tree before the callback is executed. Consider using Future.microtask instead, which ensures the navigation happens in the next microtask after the current task stack has completed.

-     WidgetsBinding.instance.addPostFrameCallback((_) async {
-       await Navigator.of(context).pushReplacement(MaterialPageRoute(
-         builder: (c) => MemoryDetailPage(
-           memory: memory,
-         ),
-       ));
-     });
+     Future.microtask(() async {
+       await Navigator.of(context).pushReplacement(
+         MaterialPageRoute(
+           builder: (c) => MemoryDetailPage(
+             memory: memory,
+           ),
+         ),
+       );
+     });

Comment on lines +53 to +57
var memorySource = MemorySource.friend;
var captureProvider = context.read<CaptureProvider>();
if (captureProvider.isGlasses) {
memorySource = MemorySource.openglass;
}

Entelligence.AI

The memorySource variable already defaults to MemorySource.friend and is only overridden when captureProvider.isGlasses is true, so the behavior is correct. Still, a ternary makes the two-way choice explicit at the point of assignment and avoids the mutable default:

-       var memorySource = MemorySource.friend;
-       var captureProvider = context.read<CaptureProvider>();
-       if (captureProvider.isGlasses) {
-         memorySource = MemorySource.openglass;
-       }
+       var captureProvider = context.read<CaptureProvider>();
+       var memorySource = captureProvider.isGlasses ? MemorySource.openglass : MemorySource.friend;

Comment on lines +116 to +124
widget.memory.transcriptSegments.isEmpty
? Column(
children: [
const SizedBox(height: 80),
Center(
child: Text(memorySource == MemorySource.friend ? "No transcript" : "Empty")),
],
)
: getTranscriptWidget(false, widget.memory.transcriptSegments, [], null)

Entelligence.AI

This code block checks if widget.memory.transcriptSegments is empty and displays a "No transcript" or "Empty" message accordingly. However, it doesn't handle the case where getTranscriptWidget returns null. This could potentially lead to runtime errors. You should add a null check before using the result of getTranscriptWidget.

-                           widget.memory.transcriptSegments.isEmpty
-                               ? Column(
-                                   children: [
-                                     const SizedBox(height: 80),
-                                     Center(
-                                         child: Text(memorySource == MemorySource.friend ? "No transcript" : "Empty")),
-                                   ],
-                                 )
-                               : getTranscriptWidget(false, widget.memory.transcriptSegments, [], null)
+                           widget.memory.transcriptSegments.isEmpty
+                               ? Column(
+                                   children: [
+                                     const SizedBox(height: 80),
+                                     Center(
+                                       child: Text(memorySource == MemorySource.friend ? "No transcript" : "Empty"),
+                                     ),
+                                   ],
+                                 )
+                               : getTranscriptWidget(false, widget.memory.transcriptSegments, [], null) ?? Container()

Comment on lines +135 to +137
widget.memory.transcriptSegments.isEmpty ? "No summary" : "Processing",
textAlign: TextAlign.center,
style: const TextStyle(fontSize: 16),

Entelligence.AI

The ternary operator is used to display either "No summary" or "Processing". However, it doesn't account for the case where widget.memory.transcriptSegments is not empty but the processing is still not done. Consider adding a condition to handle this scenario.

-                               child: Text(
-                                 widget.memory.transcriptSegments.isEmpty ? "No summary" : "Processing",
-                                 textAlign: TextAlign.center,
-                                 style: const TextStyle(fontSize: 16),
-                               ),
+                               child: Text(
+                                 widget.memory.transcriptSegments.isEmpty
+                                     ? "No summary"
+                                     : widget.memory.status == ServerProcessingMemoryStatus.done
+                                         ? "Done"
+                                         : "Processing",
+                                 textAlign: TextAlign.center,
+                                 style: const TextStyle(fontSize: 16),
+                               ),
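The suggested nested ternary reduces to a small pure function that is easy to unit-test; a language-neutral sketch in Python (statuses as strings for brevity):

```python
def memory_label(transcript_segments, status):
    # Empty transcript -> nothing to summarize; otherwise the label
    # tracks the processing status instead of always saying "Processing".
    if not transcript_segments:
        return "No summary"
    return "Done" if status == "done" else "Processing"
```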

@beastoin force-pushed the ppejj_processing_memory_ux_868 branch from f4b15bc to 8f761c8 on September 28, 2024 08:19
@beastoin marked this pull request as ready for review on September 28, 2024 08:21
@github-actions github-actions bot left a comment

Entelligence.AI

Commits reviewed: files that changed from the base of the PR, between commits 3306b44 and 8f761c8.
Files selected (20)
  • app/lib/backend/http/api/memories.dart (1)
  • app/lib/backend/http/api/processing_memories.dart (2)
  • app/lib/backend/schema/memory.dart (5)
  • app/lib/backend/schema/message_event.dart (3)
  • app/lib/main.dart (1)
  • app/lib/pages/memories/page.dart (1)
  • app/lib/pages/memories/widgets/capture.dart (1)
  • app/lib/pages/memories/widgets/memory_list_item.dart (1)
  • app/lib/pages/memories/widgets/processing_capture.dart (6)
  • app/lib/pages/processing_memories/page.dart (1)
  • app/lib/providers/capture_provider.dart (10)
  • app/lib/providers/memory_provider.dart (5)
  • app/lib/services/sockets/pure_socket.dart (2)
  • app/lib/services/sockets/transcription_connection.dart (2)
  • backend/database/processing_memories.py (2)
  • backend/models/message_event.py (1)
  • backend/models/processing_memory.py (2)
  • backend/routers/processing_memories.py (2)
  • backend/routers/transcribe.py (5)
  • backend/utils/processing_memories.py (3)
Files skipped from review due to trivial changes (3)
  • app/lib/main.dart
  • app/lib/pages/memories/widgets/capture.dart
  • app/lib/services/sockets/transcription_connection.dart
Review comments generated (63)
  • Review: 36
  • LGTM: 27


@@ -49,6 +49,7 @@ class _MemoriesPageState extends State<MemoriesPage> with AutomaticKeepAliveClie
const SliverToBoxAdapter(child: SizedBox(height: 32)),
const SliverToBoxAdapter(child: SpeechProfileCardWidget()),
SliverToBoxAdapter(child: getMemoryCaptureWidget()),
getProcessingMemoriesWidget(memoryProvider.processingMemories),

Entelligence.AI

The getProcessingMemoriesWidget function is being called with an argument, but it's not clear if this function can handle null values. If the processingMemories list in memoryProvider is null, this could potentially lead to a runtime error. Consider adding null safety checks either within the getProcessingMemoriesWidget function or before calling it.

- getProcessingMemoriesWidget(memoryProvider.processingMemories),
+ getProcessingMemoriesWidget(memoryProvider.processingMemories ?? []),

This will ensure that an empty list is passed to the function if processingMemories is null, preventing potential null dereference errors.

Comment on lines 24 to 33
var response = await makeApiCall(
url: '${Env.apiBaseUrl}v1/processing-memories/$id',
headers: {},
method: 'PATCH',
body: jsonEncode(bodyData),
);
if (response == null) return null;
debugPrint('updateProcessingMemoryServer: ${response.body}');
if (response.statusCode == 200) {
return UpdateProcessingMemoryResponse.fromJson(jsonDecode(response.body));
} else {

Entelligence.AI

The makeApiCall function is called with an empty headers object. If this API requires authentication, you should include the necessary authorization headers. Also, error handling seems to be missing for non-200 status codes. It would be beneficial to add some form of error logging or reporting in case the API call fails.

-      headers: {},
+      headers: {
+        'Authorization': 'Bearer $token',
+      },
-    } else {
+    } else {
+      // Log or report error
+      print('Error updating processing memory: ${response.statusCode}');
+    }

Comment on lines +50 to +70
var response = await makeApiCall(
url: '${Env.apiBaseUrl}v1/processing-memories/$id',
headers: {},
method: 'GET',
body: "",
);
if (response == null) return null;
if (response.statusCode == 200) {
return ProcessingMemoryResponse.fromJson(jsonDecode(response.body));
} else {
// TODO: Server returns 304 doesn't recover
CrashReporting.reportHandledCrash(
Exception('Failed to create memory'),
StackTrace.current,
level: NonFatalExceptionLevel.info,
userAttributes: {
'response': response.body,
},
);
}
return null;

Entelligence.AI

Similar to the previous comment, the makeApiCall function is called with an empty headers object. If this API requires authentication, you should include the necessary authorization headers. Additionally, the body parameter for a GET request is typically not used and can be omitted. Lastly, the exception message at line 62 says 'Failed to create memory', which seems incorrect as this function is fetching a memory, not creating one.

-     headers: {},
+     headers: {
+       'Authorization': 'Bearer $token',
+     },
-     body: "",
-       Exception('Failed to create memory'),
+       Exception('Failed to fetch memory'),

Comment on lines 147 to 175
deviceProvider.connectedDevice == null &&
!deviceProvider.isConnecting &&
!isUsingPhoneMic &&
havingCapturingMemory
? Row(
children: [
const Text(
'🎙️',
style: TextStyle(color: Colors.white, fontSize: 22, fontWeight: FontWeight.w600),
),
const SizedBox(width: 12),
Container(
decoration: BoxDecoration(
color: Colors.grey.shade800,
borderRadius: BorderRadius.circular(16),
),
padding: const EdgeInsets.symmetric(horizontal: 12, vertical: 6),
child: Text(
'Waiting for reconnect...',
style: Theme.of(context).textTheme.bodyMedium!.copyWith(color: Colors.white),
maxLines: 1,
),
),
],
)
: const SizedBox.shrink(),
// mic
// TODO: improve phone recording UI
deviceProvider.connectedDevice == null && !deviceProvider.isConnecting
deviceProvider.connectedDevice == null && !deviceProvider.isConnecting && !havingCapturingMemory

Entelligence.AI

The logic here seems to be a bit complex and hard to understand. It might be beneficial to break it down into smaller, more manageable functions or use well-named variables to make the code more readable.

-      deviceProvider.connectedDevice == null &&
-      !deviceProvider.isConnecting &&
-      !isUsingPhoneMic &&
-      havingCapturingMemory
+      bool isDeviceDisconnected = deviceProvider.connectedDevice == null;
+      bool isNotConnecting = !deviceProvider.isConnecting;
+      bool isNotUsingPhoneMic = !isUsingPhoneMic;
+
+      if (isDeviceDisconnected && isNotConnecting && isNotUsingPhoneMic && havingCapturingMemory) {

Comment on lines +309 to +431
),
);
}

// PROCESSING MEMORY

class ProcessingMemoryWidget extends StatefulWidget {
final ServerProcessingMemory memory;

const ProcessingMemoryWidget({
super.key,
required this.memory,
});
@override
State<ProcessingMemoryWidget> createState() => _ProcessingMemoryWidgetState();
}

class _ProcessingMemoryWidgetState extends State<ProcessingMemoryWidget> {
@override
Widget build(BuildContext context) {
return Consumer3<CaptureProvider, DeviceProvider, ConnectivityProvider>(
builder: (context, provider, deviceProvider, connectivityProvider, child) {
return GestureDetector(
onTap: () async {
if (widget.memory.transcriptSegments.isEmpty) return;
routeToPage(
context,
ProcessingMemoryPage(
memory: widget.memory,
));
},
child: Container(
margin: const EdgeInsets.symmetric(horizontal: 16),
width: double.maxFinite,
decoration: BoxDecoration(
color: Colors.grey.shade900,
borderRadius: BorderRadius.circular(16.0),
),
child: Padding(
padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 12),
child: Column(
mainAxisSize: MainAxisSize.min,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
_getMemoryHeader(context),
widget.memory.transcriptSegments.isNotEmpty
? Column(
children: [
const SizedBox(height: 8),
getLiteTranscriptWidget(
widget.memory.transcriptSegments,
[],
null,
),
const SizedBox(height: 8),
],
)
: const SizedBox.shrink(),
],
),
),
));
});
}

_getMemoryHeader(BuildContext context) {
return Padding(
padding: const EdgeInsets.only(left: 0, right: 12),
child: Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
Row(
children: [
const SizedBox(
width: 22,
height: 22,
child: CircularProgressIndicator(
valueColor: AlwaysStoppedAnimation<Color>(Colors.white),
),
),
const SizedBox(width: 20),
Container(
decoration: BoxDecoration(
color: Colors.grey.shade800,
borderRadius: BorderRadius.circular(16),
),
padding: const EdgeInsets.symmetric(horizontal: 12, vertical: 6),
child: Text(
'Processing',
style: Theme.of(context).textTheme.bodyMedium!.copyWith(color: Colors.white),
maxLines: 1,
),
),
],
)
],
),
);
}
}

Entelligence.AI

This new widget ProcessingMemoryWidget looks good overall. However, there's a potential issue with the getProcessingMemoriesWidget function. If the status of a processing memory changes from 'processing' to 'done', the widget for that memory will not be removed from the list. This could lead to confusion for users as they may see memories in the list that have already been processed. Consider adding a mechanism to remove or update widgets when their corresponding memories' statuses change.

-        if (pm.status == ServerProcessingMemoryStatus.processing) {
-          return ProcessingMemoryWidget(memory: pm);
-        }
-        if (pm.status == ServerProcessingMemoryStatus.done) {
-          return const SizedBox.shrink();
-        }
+        return pm.status == ServerProcessingMemoryStatus.processing 
+            ? ProcessingMemoryWidget(memory: pm)
+            : const SizedBox.shrink();

Comment on lines +61 to +75
enum ServerProcessingMemoryStatus {
capturing('capturing'),
processing('processing'),
done('done'),
unknown('unknown'),
;

final String value;
const ServerProcessingMemoryStatus(this.value);

static ServerProcessingMemoryStatus valuesFromString(String value) {
return ServerProcessingMemoryStatus.values.firstWhereOrNull((e) => e.value == value) ??
ServerProcessingMemoryStatus.unknown;
}
}

Entelligence.AI

The ServerProcessingMemoryStatus enum has been introduced with a method valuesFromString. This method returns unknown if the provided string doesn't match any of the enum values. This could potentially hide errors if an unexpected status value is received from the server. It might be better to throw an error or at least log a warning in such cases.

static ServerProcessingMemoryStatus valuesFromString(String value) {
  return ServerProcessingMemoryStatus.values.firstWhereOrNull((e) => e.value == value) ??
-       ServerProcessingMemoryStatus.unknown;
+       (throw ArgumentError('Unknown ServerProcessingMemoryStatus: $value'));
}
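The fail-fast alternative the suggestion describes can be mirrored in a small Python sketch (names here are illustrative, not from the codebase):

```python
from enum import Enum

class ProcessingMemoryStatus(Enum):
    CAPTURING = "capturing"
    PROCESSING = "processing"
    DONE = "done"

def status_from_string(value: str) -> ProcessingMemoryStatus:
    # Raise on unexpected server values instead of silently mapping to an
    # "unknown" sentinel, so protocol drift between backend and app surfaces
    # immediately instead of being hidden.
    try:
        return ProcessingMemoryStatus(value)
    except ValueError:
        raise ValueError(f"Unknown processing memory status: {value!r}") from None
```

The trade-off: failing fast catches drift early but makes old clients crash when the server adds a status; logging a warning and returning a sentinel is the forward-compatible middle ground.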

Comment on lines 77 to +106
class ServerProcessingMemory {
final String id;
final DateTime createdAt;
final DateTime? startedAt;
final DateTime? capturingTo;
final ServerProcessingMemoryStatus? status;
final List<TranscriptSegment> transcriptSegments;
final String? memoryId;

ServerProcessingMemory({
required this.id,
required this.createdAt,
this.startedAt,
this.capturingTo,
this.status,
this.transcriptSegments = const [],
this.memoryId,
});

factory ServerProcessingMemory.fromJson(Map<String, dynamic> json) {
return ServerProcessingMemory(
id: json['id'],
createdAt: DateTime.parse(json['created_at']).toLocal(),
startedAt: json['started_at'] != null ? DateTime.parse(json['started_at']).toLocal() : null,
capturingTo: json['capturing_to'] != null ? DateTime.parse(json['capturing_to']).toLocal() : null,
status: json['status'] != null ? ServerProcessingMemoryStatus.valuesFromString(json['status']) : null,
transcriptSegments: ((json['transcript_segments'] ?? []) as List<dynamic>)
.map((segment) => TranscriptSegment.fromJson(segment))
.toList(),
memoryId: json['memory_id'],

Entelligence.AI

The ServerProcessingMemory class has been updated with new fields capturingTo, status, and memoryId. The fromJson factory method has also been updated accordingly. However, there is no null safety check for json['id'] and json['created_at']. If these fields are not present in the JSON object, the application will crash. Consider adding null checks for these fields.

factory ServerProcessingMemory.fromJson(Map<String, dynamic> json) {
-   return ServerProcessingMemory(
-     id: json['id'],
-     createdAt: DateTime.parse(json['created_at']).toLocal(),
+   if (json['id'] != null && json['created_at'] != null) {
+     return ServerProcessingMemory(
+       id: json['id'],
+       createdAt: DateTime.parse(json['created_at']).toLocal(),
        startedAt: json['started_at'] != null ? DateTime.parse(json['started_at']).toLocal() : null,
        capturingTo: json['capturing_to'] != null ? DateTime.parse(json['capturing_to']).toLocal() : null,
        status: json['status'] != null ? ServerProcessingMemoryStatus.valuesFromString(json['status']) : null,
        transcriptSegments: ((json['transcript_segments'] ?? []) as List<dynamic>)
            .map((segment) => TranscriptSegment.fromJson(segment))
            .toList(),
        memoryId: json['memory_id'],
+     );
+   } else {
+     throw ArgumentError('Missing required fields in JSON');
+   }
}

Comment on lines +115 to +118
'capturing_to': capturingTo?.toUtc().toIso8601String(),
'status': status.toString(),
'transcript_segments': transcriptSegments.map((segment) => segment.toJson()).toList(),
'memory_id': memoryId,

Entelligence.AI

The toJson method has been updated to include the new fields. However, for the status field, it's directly calling toString() which will return the full name of the enum including the class name (e.g., ServerProcessingMemoryStatus.capturing). Consider using split('.').last to get only the value part of the enum.

- 'status': status.toString(),
+ 'status': status.toString().split('.').last,

Comment on lines +135 to +145
class ProcessingMemoryResponse {
final ServerProcessingMemory? result;

ProcessingMemoryResponse({required this.result});

factory ProcessingMemoryResponse.fromJson(Map<String, dynamic> json) {
return ProcessingMemoryResponse(
result: json['result'] != null ? ServerProcessingMemory.fromJson(json['result']) : null,
);
}
}

Entelligence.AI

A new class ProcessingMemoryResponse has been introduced with a factory method fromJson. This method assumes that if json['result'] is not null, it can be parsed into a ServerProcessingMemory object. If the structure of json['result'] is not as expected, this could lead to runtime errors. Consider adding error handling or validation here.

factory ProcessingMemoryResponse.fromJson(Map<String, dynamic> json) {
-   return ProcessingMemoryResponse(
-     result: json['result'] != null ? ServerProcessingMemory.fromJson(json['result']) : null,
-   );
+   if (json['result'] != null) {
+     try {
+       return ProcessingMemoryResponse(
+         result: ServerProcessingMemory.fromJson(json['result']),
+       );
+     } catch (e) {
+       throw FormatException('Invalid format for ServerProcessingMemory');
+     }
+   } else {
+     return ProcessingMemoryResponse(result: null);
+   }
}

Comment on lines +185 to +187
// local label
bool isNew = false;

Entelligence.AI

A new local label isNew has been added to the ServerMemory class. Ensure that this field is used somewhere in the codebase, otherwise, it's an unnecessary addition which can be removed to keep the code clean and efficient.

- bool isNew = false;

@github-actions github-actions bot left a comment

Entelligence.AI

Commits: files that changed from the base of the PR between 8f761c8 and 2b96942.
Files selected (6)
  • app/lib/pages/memories/widgets/memory_list_item.dart (2)
  • app/lib/pages/memories/widgets/processing_capture.dart (6)
  • app/lib/providers/capture_provider.dart (11)
  • app/lib/providers/memory_provider.dart (6)
  • backend/models/message_event.py (1)
  • backend/routers/transcribe.py (6)
Review comments generated (41)
  • Review: 18
  • LGTM: 23

Tips

Chat with Entelligence.AI Bot (@Entelligence.AI)

  • Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
  • Invite the bot into a review comment chain by tagging @Entelligence.AI in a reply.

Code suggestions

  • The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
  • You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.

Pausing incremental reviews

  • Add @Entelligence.AI: ignore anywhere in the PR description to pause further reviews from the bot.

@@ -34,6 +34,7 @@ class _MemoryListItemState extends State<MemoryListItem> {
onTap: () async {
MixpanelManager().memoryListItemClicked(widget.memory, widget.memoryIdx);
context.read<MemoryDetailProvider>().updateMemory(widget.memoryIdx, widget.date);
Provider.of<MemoryProvider>(context, listen: false).onMemoryTap(widget.memoryIdx);

Entelligence.AI

The Provider.of<MemoryProvider>(context, listen: false).onMemoryTap(widget.memoryIdx); line is added to handle the memory tap event. However, it's not clear if this method has any side effects or if it returns a Future. If it does, you should consider awaiting it to ensure that any potential exceptions are properly handled and that the operation completes before moving on.

- Provider.of<MemoryProvider>(context, listen: false).onMemoryTap(widget.memoryIdx);
+ await Provider.of<MemoryProvider>(context, listen: false).onMemoryTap(widget.memoryIdx);

Comment on lines +50 to +61
decoration: widget.memory.isNew
? BoxDecoration(
color: Colors.grey.shade900,
borderRadius: BorderRadius.circular(16.0),
border: Border.all(
color: Colors.lightBlue,
width: 1,
))
: BoxDecoration(
color: Colors.grey.shade900,
borderRadius: BorderRadius.circular(16.0),
),

Entelligence.AI

This change introduces a conditional decoration for the widget based on whether the memory is new or not. It seems fine from a logic perspective, but make sure that the UI changes have been thoroughly tested across different screen sizes and resolutions to ensure that they look as expected.

Comment on lines +32 to +42
memory_id: Optional[str] = None

def to_json(self):
j = self.model_dump(mode="json")
j["type"] = self.event_type
return j

class ProcessingMemoryStatusChanged(MessageEvent):
processing_memory_id: Optional[str] = None
processing_memory_status: Optional[str] = None
memory_id: Optional[str] = None

Entelligence.AI

The new class ProcessingMemoryStatusChanged inherits from MessageEvent and introduces three optional attributes: processing_memory_id, processing_memory_status, and memory_id. However, the to_json method is not properly indented under this class. This could lead to unexpected behavior or errors as it's currently defined in the global scope instead of being a method of the class. Here's how you can fix it:

class ProcessingMemoryStatusChanged(MessageEvent):
    processing_memory_id: Optional[str] = None
    processing_memory_status: Optional[str] = None
    memory_id: Optional[str] = None

    def to_json(self):
        j = self.model_dump(mode="json")
        j["type"] = self.event_type
        return j

This change ensures that the to_json method is correctly defined within the ProcessingMemoryStatusChanged class.
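A minimal, dependency-free sketch of the intended shape (using dataclasses in place of the project's pydantic models; to_json lives on the base event and is inherited by the subclass):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class MessageEvent:
    event_type: str

    def to_json(self):
        # Serialize all fields and tag the payload with its event type,
        # mirroring the model_dump + "type" pattern in the PR.
        j = asdict(self)
        j["type"] = self.event_type
        return j

@dataclass
class ProcessingMemoryStatusChanged(MessageEvent):
    processing_memory_id: Optional[str] = None
    processing_memory_status: Optional[str] = None
    memory_id: Optional[str] = None
```

Because to_json is defined on the base class, subclasses pick it up automatically; redefining it per subclass is only needed if the payload shape differs.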

Comment on lines +42 to +44
bool havingCapturingMemory = provider.capturingProcessingMemory != null;

return (showPhoneMic || isConnected)
return (showPhoneMic || isConnected || havingCapturingMemory)

Entelligence.AI

The new condition havingCapturingMemory has been added to the return statement. This seems to be a logical addition, but it's important to ensure that this doesn't introduce any unexpected behavior in the UI. If the capturing memory is not null, the GestureDetector widget will be returned even if the device is not connected and the phone mic is not being used. Make sure this is the intended behavior.

Comment on lines +147 to +174
deviceProvider.connectedDevice == null &&
!isUsingPhoneMic &&
havingCapturingMemory
? Row(
children: [
const Text(
'🎙️',
style: TextStyle(color: Colors.white, fontSize: 22, fontWeight: FontWeight.w600),
),
const SizedBox(width: 12),
Container(
decoration: BoxDecoration(
color: Colors.grey.shade800,
borderRadius: BorderRadius.circular(16),
),
padding: const EdgeInsets.symmetric(horizontal: 12, vertical: 6),
child: Text(
'Waiting for reconnect...',
style: Theme.of(context).textTheme.bodyMedium!.copyWith(color: Colors.white),
maxLines: 1,
),
),
],
)
: const SizedBox.shrink(),
// mic
// TODO: improve phone recording UI
deviceProvider.connectedDevice == null && !deviceProvider.isConnecting
deviceProvider.connectedDevice == null && !deviceProvider.isConnecting && !havingCapturingMemory

Entelligence.AI

This block of code introduces a new UI element when deviceProvider.connectedDevice == null && !isUsingPhoneMic && havingCapturingMemory. It's crucial to verify that this UI change aligns with the overall design and user experience of your application. Also, consider extracting this UI logic into a separate function or widget for better readability and maintainability.

Comment on lines +150 to +169
Future<void> _onProcessingMemoryStatusChanged(String processingMemoryId, ServerProcessingMemoryStatus status) async {
if (capturingProcessingMemory == null || capturingProcessingMemory?.id != processingMemoryId) {
debugPrint("Warn: Didn't track processing memory yet $processingMemoryId");
}

ProcessingMemoryResponse? result = await fetchProcessingMemoryServer(id: processingMemoryId);
if (result?.result == null) {
debugPrint("Can not fetch processing memory, result null");
return;
}
var pm = result!.result!;
if (status == ServerProcessingMemoryStatus.processing) {
memoryProvider?.onNewProcessingMemory(pm);
return;
}
if (status == ServerProcessingMemoryStatus.done) {
memoryProvider?.onProcessingMemoryDone(pm);
return;
}
}

Entelligence.AI

The _onProcessingMemoryStatusChanged function seems to be handling different statuses of processing memory. However, it doesn't handle all possible cases of ServerProcessingMemoryStatus. It's important to consider what should happen if the status is not processing or done. If there are other possible statuses, you might want to add error handling or logging for those cases.
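The exhaustiveness concern can be sketched language-agnostically; a Python version of the dispatch (function and parameter names are illustrative):

```python
def handle_status_change(status: str, on_processing, on_done, log=print) -> bool:
    # Dispatch explicitly per status; unexpected values are logged rather than
    # silently dropped, so new server-side statuses are noticed.
    if status == "processing":
        on_processing()
        return True
    if status == "done":
        on_done()
        return True
    log(f"Unhandled processing memory status: {status}")
    return False
```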

Comment on lines 171 to 196
Future<void> _onProcessingMemoryCreated(String processingMemoryId) async {
this.processingMemoryId = processingMemoryId;

// Fetch and watch capturing status
ProcessingMemoryResponse? result = await fetchProcessingMemoryServer(
id: processingMemoryId,
);
if (result?.result == null) {
debugPrint("Can not fetch processing memory, result null");
}
_setCapturingProcessingMemory(result?.result);

// Set pre-segements
if (capturingProcessingMemory != null && (capturingProcessingMemory?.transcriptSegments ?? []).isNotEmpty) {
segments = capturingProcessingMemory!.transcriptSegments;
setHasTranscripts(segments.isNotEmpty);
}

// Notify combining
if (capturingProcessingMemory?.memoryId != null) {
memoryProvider?.onNewCombiningMemory(capturingProcessingMemory!);
}

// Update processing memory
_updateProcessingMemory();
}

Entelligence.AI

In the _onProcessingMemoryCreated function, there's no error handling for when result?.result == null at line 178. This could lead to unexpected behavior if result?.result is null and _setCapturingProcessingMemory(result?.result); is called with a null argument. Consider adding error handling or a return statement in this case.

Comment on lines +198 to +224
void _trackCapturingProcessingMemory() {
if (capturingProcessingMemory == null) {
return;
}

var pm = capturingProcessingMemory!;

var delayMs = pm.capturingTo != null
? pm.capturingTo!.millisecondsSinceEpoch - DateTime.now().millisecondsSinceEpoch
: 2 * 60 * 1000; // 2m
if (delayMs > 0) {
_processingMemoryWatchTimer?.cancel();
_processingMemoryWatchTimer = Timer(Duration(milliseconds: delayMs), () async {
ProcessingMemoryResponse? result = await fetchProcessingMemoryServer(id: pm.id);
if (result?.result == null) {
debugPrint("Can not fetch processing memory, result null");
return;
}

_setCapturingProcessingMemory(result?.result);
if (capturingProcessingMemory == null) {
// Force clean
_clean();
}
});
}
}

Entelligence.AI

In the _trackCapturingProcessingMemory function, there's no error handling for when result?.result == null at line 212. This could lead to unexpected behavior if result?.result is null and _setCapturingProcessingMemory(result?.result); is called with a null argument. Consider adding error handling or a return statement in this case.
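The Dart delay computation in this function reduces to a small pure function; a Python sketch of the same logic (names are illustrative):

```python
import time

DEFAULT_WATCH_DELAY_MS = 2 * 60 * 1000  # fall back to 2 minutes when capturing_to is unset

def capture_watch_delay_ms(capturing_to_ms, now_ms=None):
    # Wait until the capturing window closes; a non-positive result means the
    # window already closed and the timer should not be scheduled at all.
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    if capturing_to_ms is None:
        return DEFAULT_WATCH_DELAY_MS
    return capturing_to_ms - now_ms
```

Separating this out also makes the "delayMs <= 0" branch unit-testable without timers.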

@@ -204,7 +291,7 @@ class CaptureProvider extends ChangeNotifier

// use memory provider to add memory
MixpanelManager().memoryCreated(memory);
memoryProvider?.addMemory(memory);
memoryProvider?.upsertMemory(memory);

Entelligence.AI

The change from addMemory to upsertMemory suggests that the memory object can now be updated if it already exists in the provider. Ensure that this is the intended behavior and that it won't cause any issues elsewhere in your application where addMemory was previously used.

Comment on lines +443 to +456
# Update processing memory status
processing_memory.status = ProcessingMemoryStatus.Processing
processing_memories_db.update_processing_memory_status(
uid, processing_memory.id,
processing_memory.status,
)
# Message: processing memory status changed
ok = await _send_message_event(ProcessingMemoryStatusChanged(
event_type="processing_memory_status_changed",
processing_memory_id=processing_memory.id,
processing_memory_status=processing_memory.status),
)
if not ok:
print("Can not send message event processing_memory_status_changed")

Entelligence.AI

This block of code is updating the status of the processing memory and sending a message event about the status change. This pattern is repeated later in the code as well. To avoid repetition and improve maintainability, consider extracting this into a separate function.

-        # Update processing memory status
-        processing_memory.status = ProcessingMemoryStatus.Processing
-        processing_memories_db.update_processing_memory_status(
-            uid, processing_memory.id,
-            processing_memory.status,
-        )
-        # Message: processing memory status changed
-        ok = await _send_message_event(ProcessingMemoryStatusChanged(
-            event_type="processing_memory_status_changed",
-            processing_memory_id=processing_memory.id,
-            processing_memory_status=processing_memory.status),
-        )
-        if not ok:
-            print("Can not send message event processing_memory_status_changed")
+        update_processing_memory_status_and_notify(
+            uid, 
+            processing_memory, 
+            ProcessingMemoryStatus.Processing
+        )

In the new update_processing_memory_status_and_notify function, you can encapsulate the logic for updating the status and sending a notification.
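A hedged sketch of such a helper (the callables stand in for processing_memories_db.update_processing_memory_status and _send_message_event, which are not shown here; the exact signature is an assumption):

```python
import asyncio

async def update_status_and_notify(uid, processing_memory, new_status,
                                   update_status_db, send_event):
    # Persist the new status first, then broadcast the change as a message
    # event; both call sites in the handler collapse into this one function.
    processing_memory.status = new_status
    update_status_db(uid, processing_memory.id, new_status)
    ok = await send_event({
        "event_type": "processing_memory_status_changed",
        "processing_memory_id": processing_memory.id,
        "processing_memory_status": new_status,
    })
    if not ok:
        print("Can not send message event processing_memory_status_changed")
    return ok
```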

@github-actions github-actions bot left a comment

Entelligence.AI

Commits: files that changed from the base of the PR between 2b96942 and 75bd737.
Files selected (2)
  • backend/database/processing_memories.py (2)
  • backend/routers/transcribe.py (7)
Review comments generated (12)
  • Review: 10
  • LGTM: 2


Comment on lines 372 to 381
now_ts = time.time()
if last_processing_memory_data:
last_processing_memory = ProcessingMemory(**last_processing_memory_data)
last_segment_end = 0
for segment in last_processing_memory.transcript_segments:
last_segment_end = max(last_segment_end, segment.end)
timer_segment_start = last_processing_memory.timer_segment_start if last_processing_memory.timer_segment_start else last_processing_memory.timer_start
if timer_segment_start + last_segment_end + min_seconds_limit > time.time():
processing_memory = last_processing_memory
if not last_processing_memory.capturing_to or last_processing_memory.capturing_to.timestamp() > now_ts:
last_segment_end = 0
for segment in last_processing_memory.transcript_segments:
last_segment_end = max(last_segment_end, segment.end)
timer_segment_start = last_processing_memory.timer_segment_start if last_processing_memory.timer_segment_start else last_processing_memory.timer_start
if timer_segment_start + last_segment_end + min_seconds_limit > now_ts:
processing_memory = last_processing_memory

Entelligence.AI

This code block introduces a check for last_processing_memory.capturing_to and compares it with the current timestamp. This seems fine, but ensure that capturing_to is always set before this check to avoid potential NoneType errors.

Comment on lines +393 to +395
# Status, capturing to
processing_memory.status = ProcessingMemoryStatus.Capturing
processing_memory.capturing_to = datetime.fromtimestamp(now_ts + min_seconds_limit, timezone.utc)

Entelligence.AI

The status of processing_memory is being set to ProcessingMemoryStatus.Capturing and capturing_to is being set here. Make sure these changes are saved in the database to reflect the updates.

Comment on lines +40 to +50
def get_processing_memories(uid: str, statuses: [str] = [], filter_ids: [str] = [], limit: int = 5):
processing_memories_ref = (
db.collection('users').document(uid).collection('processing_memories')
)
if len(statuses) > 0:
processing_memories_ref = processing_memories_ref.where(filter=FieldFilter('status', 'in', statuses))
if len(filter_ids) > 0:
processing_memories_ref = processing_memories_ref.where(filter=FieldFilter('id', 'in', filter_ids))
processing_memories_ref = processing_memories_ref.order_by('created_at', direction=firestore.Query.DESCENDING)
processing_memories_ref = processing_memories_ref.limit(limit)
return [doc.to_dict() for doc in processing_memories_ref.stream()]

Entelligence.AI

The `where(filter=FieldFilter(...))` keyword form is valid in google-cloud-firestore 2.7 and newer, where it is the recommended API and the positional `where(field, op, value)` form is deprecated. Only if the project pins an older client that predates `FieldFilter` would the positional form be needed:

- processing_memories_ref = processing_memories_ref.where(filter=FieldFilter('status', 'in', statuses))
+ processing_memories_ref = processing_memories_ref.where('status', 'in', statuses)

- processing_memories_ref = processing_memories_ref.where(filter=FieldFilter('id', 'in', filter_ids))
+ processing_memories_ref = processing_memories_ref.where('id', 'in', filter_ids)

Also note that Firestore `in` filters accept only a limited number of values per query, so large `statuses` or `filter_ids` lists would need batching.
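Independent of client version, the intended query semantics can be checked with a pure-Python sketch (plain dicts stand in for Firestore documents):

```python
def filter_processing_memories(docs, statuses=(), filter_ids=(), limit=5):
    # Optional status/id filters, newest-first ordering by created_at, then a
    # limit — mirroring the chained Firestore query above.
    out = [d for d in docs
           if (not statuses or d["status"] in statuses)
           and (not filter_ids or d["id"] in filter_ids)]
    out.sort(key=lambda d: d["created_at"], reverse=True)
    return out[:limit]
```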

Comment on lines +52 to +58
def update_processing_memory_segments(uid: str, id: str, segments: List[dict], capturing_to: datetime):
user_ref = db.collection('users').document(uid)
memory_ref = user_ref.collection('processing_memories').document(id)
memory_ref.update({'transcript_segments': segments})
memory_ref.update({
'transcript_segments': segments,
'capturing_to': capturing_to,
})

Entelligence.AI

You've added capturing_to as an additional parameter to the update_processing_memory_segments function. This is a good change as it allows updating the capturing time along with the segments.

Comment on lines +60 to +67
def update_basic(uid: str, id: str, geolocation: dict, emotional_feedback: bool, capturing_to: datetime):
user_ref = db.collection('users').document(uid)
memory_ref = user_ref.collection('processing_memories').document(id)
memory_ref.update({
'emotional_feedback': emotional_feedback,
'geolocation':geolocation,
'capturing_to':capturing_to,
})

Entelligence.AI

You've added capturing_to as an additional parameter to the update_basic function. This is a good change as it allows updating the capturing time along with other basic information.

Comment on lines +69 to +73
def update_processing_memory_status(uid: str, id: str, status: str):
user_ref = db.collection('users').document(uid)
memory_ref = user_ref.collection('processing_memories').document(id)
memory_ref.update({
'status': status,

Entelligence.AI

This new function update_processing_memory_status is a good addition as it allows updating the status of a processing memory separately.

@github-actions github-actions bot left a comment

Entelligence.AI

Commits: files that changed from the base of the PR between 75bd737 and d0a8e69.
Files selected (1)
  • backend/routers/transcribe.py (7)
Review comments generated (7)
  • Review: 5
  • LGTM: 2


Comment on lines +178 to +185

now_ts = time.time()
capturing_to = datetime.fromtimestamp(now_ts + min_seconds_limit, timezone.utc)

processing_memories_db.update_processing_memory_segments(
uid, processing_memory.id, list(map(lambda m: m.dict(), processing_memory.transcript_segments))
uid, processing_memory.id,
list(map(lambda m: m.dict(), processing_memory.transcript_segments)),
capturing_to,

Entelligence.AI

The update_processing_memory_segments function is being called with a new parameter capturing_to. Ensure that this function has been updated to handle this new parameter correctly. Also, consider error handling for the case when the update operation fails.

Comment on lines +372 to +375
now_ts = time.time()
if last_processing_memory_data:
last_processing_memory = ProcessingMemory(**last_processing_memory_data)
last_segment_end = 0
for segment in last_processing_memory.transcript_segments:
last_segment_end = max(last_segment_end, segment.end)
timer_segment_start = last_processing_memory.timer_segment_start if last_processing_memory.timer_segment_start else last_processing_memory.timer_start
if timer_segment_start + last_segment_end + min_seconds_limit > time.time():
if last_processing_memory.capturing_to and last_processing_memory.capturing_to.timestamp() > now_ts:

Entelligence.AI

The logic for determining if processing_memory should be set to last_processing_memory has changed significantly. The new code checks if capturing_to timestamp is greater than the current time. This seems reasonable, but ensure that this change doesn't break any existing functionality that relied on the old logic.
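In this revision the reuse decision boils down to "is the previous capturing window still open". A sketch of just that predicate (the function name is illustrative):

```python
import time
from datetime import datetime, timedelta, timezone
from typing import Optional

def should_reuse_processing_memory(capturing_to: Optional[datetime],
                                   now_ts: Optional[float] = None) -> bool:
    # Reuse the last processing memory only while its capturing window is
    # still open; a missing or elapsed capturing_to starts a fresh one.
    if now_ts is None:
        now_ts = time.time()
    return capturing_to is not None and capturing_to.timestamp() > now_ts
```

Note this treats a missing capturing_to as "do not reuse", which matches the final diff here but differs from the intermediate `not capturing_to or ...` variant earlier in the thread.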

Comment on lines +412 to +414

```diff
     processing_memory_id=processing_memory.id,
+    memory_id=processing_memory.memory_id,
 ),
```

Entelligence.AI:

The NewProcessingMemoryCreated event now includes memory_id as an additional attribute. Make sure all consumers of this event are updated to handle this new attribute. Also, consider error handling for the case when sending the message event fails.
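On the consumer side, one defensive pattern is to read the new field as optional so payloads from servers without this change still parse. The field names follow the diff; the handler itself is a hypothetical sketch, not the app's actual listener.

```python
def handle_message_event(payload: dict):
    """Extract ids from a new_processing_memory_created event, tolerating
    older payloads that lack the newly added memory_id field."""
    if payload.get("event_type") != "new_processing_memory_created":
        return None
    return {
        "processing_memory_id": payload.get("processing_memory_id"),
        # memory_id is new in this PR; treat it as optional for compatibility
        "memory_id": payload.get("memory_id"),
    }
```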

Comment on lines +439 to +462

```diff
+now_ts = time.time()
+capturing_to = datetime.fromtimestamp(now_ts + min_seconds_limit, timezone.utc)
+
 processing_memories_db.update_processing_memory_segments(
     uid, processing_memory.id,
-    list(map(lambda m: m.dict(), processing_memory.transcript_segments))
+    list(map(lambda m: m.dict(), processing_memory.transcript_segments)),
+    capturing_to,
 )
+
+# Update processing memory status
+processing_memory.status = ProcessingMemoryStatus.Processing
+processing_memories_db.update_processing_memory_status(
+    uid, processing_memory.id,
+    processing_memory.status,
+)
+# Message: processing memory status changed
+ok = await _send_message_event(ProcessingMemoryStatusChanged(
+    event_type="processing_memory_status_changed",
+    processing_memory_id=processing_memory.id,
+    processing_memory_status=processing_memory.status),
+)
+if not ok:
+    print("Can not send message event processing_memory_status_changed")
```

Entelligence.AI:

This block of code introduces several changes including updating processing memory segments with a new capturing_to parameter, updating processing memory status, and sending a new ProcessingMemoryStatusChanged event. Ensure that these operations are handled correctly in their respective functions and that all consumers of the new event are updated to handle it. Also, consider error handling for the case when these operations fail.
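Since this persist-status-then-notify sequence appears at both the Processing and Done transitions, it could be factored into one helper with the error handling the comment requests. A minimal sketch, assuming a dict-based event payload and print-based logging; the enum values and helper name are assumptions (the real enum lives in models/processing_memory.py).

```python
from enum import Enum

class ProcessingMemoryStatus(str, Enum):
    # Values are assumptions for illustration
    Capturing = "capturing"
    Processing = "processing"
    Done = "done"

async def set_status_and_notify(db, send_event, uid, processing_memory, status):
    """Persist the new status, then emit a status-changed event; log (rather
    than raise) if the event cannot be delivered. Illustrative sketch."""
    processing_memory.status = status
    db.update_processing_memory_status(uid, processing_memory.id, status)
    ok = await send_event({
        "event_type": "processing_memory_status_changed",
        "processing_memory_id": processing_memory.id,
        "processing_memory_status": status,
    })
    if not ok:
        print("Can not send message event processing_memory_status_changed")
    return ok
```

Factoring this out would also give a single place to add retries or structured logging later.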

Comment on lines +510 to +524
# Update processing memory status
processing_memory.status = ProcessingMemoryStatus.Done
processing_memories_db.update_processing_memory_status(
uid, processing_memory.id,
processing_memory.status,
)
# Message: processing memory status changed
ok = await _send_message_event(ProcessingMemoryStatusChanged(
event_type="processing_memory_status_changed",
processing_memory_id=processing_memory.id,
memory_id=processing_memory.memory_id,
processing_memory_status=processing_memory.status),
)
if not ok:
print("Can not send message event processing_memory_status_changed")

Entelligence.AI:

This block of code updates the processing memory status to Done and sends a ProcessingMemoryStatusChanged event. Ensure that these operations are handled correctly in their respective functions and that all consumers of the new event are updated to handle it. Also, consider error handling for the case when these operations fail.

@beastoin beastoin merged commit 067d2ef into main Sep 28, 2024
1 check passed
@beastoin beastoin deleted the ppejj_processing_memory_ux_868 branch September 28, 2024 12:55