This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

[Reference translation] SK-AM69: Low FPS while using the Yolov7 object detection model

Guru**** 2535150 points
Other Parts Discussed in Thread: AM69A

Please note: this content was machine translated and may contain grammatical or other translation errors; it is for reference only. For accurate content, please see the English original at the link below or translate it yourself.

https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1566538/sk-am69-low-fps-while-using-yolov7-object-detection-model

Part Number: SK-AM69
Other Parts Discussed in Thread: AM69A

Tool/software:

I deployed the compiled model and detection is accurate, but the FPS is only 11. Is this the expected performance, or is something wrong? I am running on an AM69A processor, and I trained the model using Edge AI Tensor Lab.

import onnxruntime as ort
import cv2
import numpy as np
import time

model_path = "/zken/od-yolov7/model/yolov7_l_standalone_kenny_yuv_input.onnx"
video_path = "/zken/data/fast.mp4"
artifacts_folder = "/zken/od-yolov7/artifacts"
providers = ["TIDLExecutionProvider", "CPUExecutionProvider"]
so = ort.SessionOptions()
runtime_options = {
    "artifacts_folder": artifacts_folder,
}
provider_options = [runtime_options, {}]
session = ort.InferenceSession(model_path, providers=providers, provider_options=provider_options, sess_options=so)
print("Active providers:", session.get_providers())

input_name = session.get_inputs()[0].name
output_names = [output.name for output in session.get_outputs()]

cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    raise SystemExit("Error: Could not open video.")

# FPS calculation variables
frame_count = 0
fps_start_time = time.time()
fps = 0

while True:
    ret, frame = cap.read()
    if not ret:
        break
    
    time_start = time.time()

    # Preprocess: BGR -> RGB, resize to model input, HWC -> NCHW, scale to [0, 1]
    input_image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    input_image = cv2.resize(input_image, (640, 640)).transpose(2, 0, 1)
    input_image = np.expand_dims(input_image, axis=0).astype(np.float32)
    input_image /= 255.0
    outputs = session.run(output_names, {input_name: input_image})
    
    # end_time=time.time()-time_start

    # print(f"Inference time: {end_time:.2f} seconds")
    
    # FPS calculation
    frame_count += 1
    elapsed_time = time.time() - fps_start_time
    
    # Update FPS every second
    if elapsed_time >= 1.0:
        fps = frame_count / elapsed_time
        frame_count = 0
        fps_start_time = time.time()
        print(f"FPS: {fps:.2f}")
    
    # # Display FPS on the frame
    # cv2.putText(frame, f"FPS: {fps:.2f}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    
    # # Display the frame
    # cv2.imshow('Video with FPS', frame)
    
    # # Break on 'q' key press
    # if cv2.waitKey(1) & 0xFF == ord('q'):
    #     break

# Release resources
cap.release()
# cv2.destroyAllWindows()


  • Hello Venk,

    In our model selection tool, we use these as the benchmarks for Yolov7.

    You can view the same here: https://dev.ti.com/edgeaistudio/

    Other factors depend on the model configuration or input, whether DEBUG_LEVEL is set above zero, or whether there are any custom layers.

    Please let us know if you have any other questions.

    Regards,

    Christina
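The DEBUG_LEVEL point above maps to the TIDL provider options in the original script. A hedged sketch, assuming the `debug_level` key used in TI's edgeai-tidl-tools examples (verify the exact key name against your SDK version):

```python
# Runtime options for the TIDL execution provider.
# A debug_level above 0 enables extra tracing in TIDL and can reduce
# throughput; keep it at 0 for performance measurements.
runtime_options = {
    "artifacts_folder": "/zken/od-yolov7/artifacts",
    "debug_level": 0,  # assumed key name, per edgeai-tidl-tools examples
}
provider_options = [runtime_options, {}]  # second dict is for CPUExecutionProvider
```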

  • Thank you.


    Regards,
    Venkat