Using Deep Learning and Computer Vision for Beef Cattle Behavior Recognition

In an article published in the journal Animals, researchers from China proposed an innovative model based on computer vision and deep learning to recognize the behaviors of beef cattle in various scenarios.

Study: Using Deep Learning and Computer Vision for Beef Cattle Behavior Recognition. Image credit: William Edge/Shutterstock

They established a more comprehensive dataset of beef cattle postures and enhanced the model's recognition performance by improving the convolution modules and incorporating attention modules into the you only look once version 8 (YOLOv8) algorithm. They reported that the novel approach could accurately recognize beef cattle behaviors in real time in complex scenarios such as strong light, low light, moderate density, and high density.

Background
Beef cattle are one of the most important livestock species, providing meat, milk, leather, and other products for human use. Their behavior reflects their physiological and psychological status and can serve as an indicator of health, productivity, and welfare. Therefore, recognizing and analyzing the behavior of beef cattle is essential for early disease warning and anomaly monitoring in cattle farming.

Traditional behavior recognition methods rely on wearable sensors or manual observation, which have drawbacks such as high cost, low efficiency, invasiveness, and subjectivity. With the development of computer vision and deep learning, non-contact methods based on image or video analysis have emerged as a promising alternative. These methods can automatically detect and classify the behavior of beef cattle without disturbing their natural activities. However, existing approaches face challenges, such as the complexity and variability of beef cattle behavior, the interference of background noise, and the trade-off between speed and accuracy.

About the Research

In the present paper, the authors designed a deep learning and computer vision-based technique to recognize nine different behaviors of beef cattle in a real-time, non-intrusive way. Their method builds on YOLOv8, the latest single-stage object detection algorithm, which integrates and optimizes the strengths of the YOLO series. The new approach improved the YOLOv8 algorithm by introducing the dynamic snake convolution (DSConv) module and the BiFormer attention mechanism.

The DSConv module is inspired by deformable convolution, which adapts the convolutional kernel's shape during feature learning to focus on the essential structural features of beef cattle behavior. However, unlike deformable convolution, which learns deformation offsets freely, the DSConv module uses an iterative strategy. This strategy sequentially matches each sampling position to the previous one, ensuring continuous feature attention without the perception area spreading excessively under large deformation offsets.
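The continuity constraint can be illustrated with a minimal sketch (not the paper's implementation): each kernel sampling point is placed relative to the previous point by a bounded delta, so the sampled positions form a continuous "snake" rather than scattering freely as unconstrained deformable offsets can.

```python
import numpy as np

def snake_sampling_coords(center, deltas):
    """Illustrative sketch of the DSConv idea: each sampling point is offset
    relative to the *previous* point (cumulative), so the kernel traces a
    continuous curve instead of scattering under large free-form offsets."""
    coords = [np.asarray(center, dtype=float)]
    for d in deltas:                   # each delta would be bounded in practice
        coords.append(coords[-1] + d)  # cumulative step enforces continuity
    return np.stack(coords)

# A 1D "snake" of 5 sampling points wandering away from the kernel center:
pts = snake_sampling_coords((0.0, 0.0), [(1, 0.3), (1, -0.2), (1, 0.5), (1, 0.1)])
```

In a real implementation the deltas would be predicted by a small convolutional branch and used to bilinearly sample the feature map; this sketch only shows the cumulative-offset geometry.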

The BiFormer attention mechanism is a lightweight vision transformer. It uses a bi-level routing attention (BRA) module to filter out weakly correlated tokens and retain a small number of candidate routing regions. The BRA module applies attention at the token level within these regions to obtain pixel-level focused areas that capture both global and local feature correlations and enable more efficient information fusion. The BiFormer attention mechanism improves the algorithm's ability to capture long-distance context dependencies and enhances the model's computational efficiency.
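The two-stage routing can be sketched as follows. This is a simplified, single-query toy version (the region assignment, scoring by mean keys, and shapes are illustrative assumptions, not the BiFormer implementation): a coarse step prunes weakly correlated regions, then ordinary attention runs only over tokens in the surviving regions.

```python
import numpy as np

def bi_level_routing_attention(q, k, v, region_ids, top_k=2):
    """Hedged sketch of bi-level routing attention (BRA):
    1) coarse routing: score each region and keep only the top-k regions;
    2) fine attention: token-level softmax attention restricted to them.
    Shapes: q (d,), k and v (n, d), region_ids (n,) assigning tokens to regions."""
    regions = np.unique(region_ids)
    # Coarse level: score each region by its mean key's similarity to the query.
    region_scores = np.array([q @ k[region_ids == r].mean(axis=0) for r in regions])
    keep = regions[np.argsort(region_scores)[-top_k:]]  # prune weak regions
    mask = np.isin(region_ids, keep)
    # Fine level: standard attention over the surviving tokens only.
    logits = k[mask] @ q
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ v[mask]

q = np.array([1.0, 0.0])
k = np.array([[1.0, 0.0], [1.0, 0.0], [-1.0, 0.0], [-1.0, 0.0]])
v = np.array([[1.0, 1.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
out = bi_level_routing_attention(q, k, v, np.array([0, 0, 1, 1]), top_k=1)
```

The efficiency gain comes from the pruning step: tokens in discarded regions never enter the quadratic-cost fine attention.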

The study constructed a new dataset of beef cattle behavior covering nine distinct behaviors: lying, standing, mounting, licking, fighting, eating, walking, drinking, and searching. The dataset comprises 34,560 samples obtained by collecting and filtering videos of 45 beef cattle recorded with a fixed camera and a mobile phone. The dataset underwent data augmentation to enhance its diversity and robustness, and was divided into training and testing sets in an 8:2 ratio.
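The 8:2 split can be sketched as below (the exact splitting procedure and seed are assumptions; the paper only states the ratio). With 34,560 samples, an 80/20 split yields 27,648 training and 6,912 test samples.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Illustrative shuffled 8:2 train/test split, as described for the
    34,560-sample beef cattle behavior dataset (procedure is an assumption)."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)          # deterministic shuffle
    cut = int(len(samples) * train_ratio)     # 80% boundary
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

train, test = split_dataset(list(range(34560)))
```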

Furthermore, the authors evaluated their method on the beef cattle behavior dataset using three metrics: accuracy, average precision at an intersection over union (IoU) threshold of 0.5 (AP50), and average precision averaged over IoU thresholds from 0.5 to 0.95 (AP50:95). They compared the developed model with several state-of-the-art methods, including YOLOv5, YOLOv6, YOLOv7, and YOLOv8. Moreover, they conducted ablation studies to analyze the effects of the DSConv module and the BiFormer attention mechanism on model performance.
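Both AP metrics rest on the IoU overlap measure, which can be computed as below: AP50 counts a detection as correct when its IoU with the ground-truth box is at least 0.5, while AP50:95 averages AP over IoU thresholds from 0.5 to 0.95 in steps of 0.05.

```python
def iou(box_a, box_b):
    """IoU between two axis-aligned boxes given as (x1, y1, x2, y2):
    intersection area divided by union area, the overlap measure
    underlying the AP50 and AP50:95 detection metrics."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For example, two 2x2 boxes overlapping in a 1x1 corner have IoU 1/7, well below the 0.5 threshold, so the detection would not count toward AP50.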

Research Findings

The outcomes showed that the presented method achieved the best results among all compared algorithms in terms of accuracy, AP50:95, and AP50. The accuracy of beef cattle behavior recognition reached 93.6%, with AP50:95 and AP50 of 71.5% and 96.5%, respectively, representing enhancements of 5.2%, 5.3%, and 7.1% over the original YOLOv8n.

Notably, the average recognition accuracy for the lying posture of beef cattle reached 98.9%. The introduced approach demonstrated high robustness and adaptability in complex scenarios such as varying lighting conditions and cattle densities, maintaining accuracies of 92.3% in strong light, 91.8% in low light, 94.5% at moderate density, 92.6% at high density, and 93.4% at sparse density.

The method also performed well across viewing angles, reaching accuracies of 93.2% in the front view, 93.8% in the side view, and 93.5% in the back view. The results indicated that the proposed system can effectively extract and fuse the key features of beef cattle behavior and achieve high-performance behavior recognition in a real-time, non-intrusive way.

Conclusion
In summary, the novel model is adaptive, effective, efficient, and robust for recognizing the behaviors of beef cattle. It achieved a high accuracy of 93.6% in recognizing beef cattle behavior and outperformed existing methods, providing theoretical and practical support for intelligent and welfare-oriented cattle farming.

The researchers acknowledged the limitations and challenges of their research and suggested some directions for future work, such as extending the dataset to include more behaviors and scenarios of beef cattle, exploring more efficient and lightweight attention mechanisms, and applying the method to other livestock species and animal behavior recognition tasks.


Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.


Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Osama, Muhammad. (2024, February 01). Using Deep Learning and Computer Vision for Beef Cattle Behavior Recognition. AZoAi. Retrieved on April 16, 2024 from



