guides/security-alarm-system/ #9395
Replies: 32 comments 97 replies
-
Hi! Can I add a confidence threshold, let's say 90%, to display on the box plots and to check before it sends out an email? Thanks!
-
Hi, I can't do the email setup for "Sign in with app passwords" and the authentication part. It looks like Google changed those settings and I can't find them. Is there another way?
-
Thanks a lot, let me try this out.
…On Wed, Apr 17, 2024 at 12:03 AM Glenn Jocher ***@***.***> wrote:
Hey there! 🌟
For your Raspberry Pi security project, you're on a great track! You want
to activate a GPIO pin and save a 10-second video when a specific class is
detected. Here’s a concise approach to include GPIO control and video
recording. Make sure you have RPi.GPIO and OpenCV installed.
First, you'll need to import the necessary libraries for GPIO control and
initialize the pin you'll use:
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)  # Use BCM GPIO numbering
GPIO_pin = 18  # Replace with your GPIO pin number
GPIO.setup(GPIO_pin, GPIO.OUT)
To record a 10-second video, you can use OpenCV's VideoWriter. Here's how
to integrate it with object detection:
import cv2
import RPi.GPIO as GPIO
from picamera2 import Picamera2
from ultralytics import YOLO

# Setup GPIO
GPIO.setmode(GPIO.BCM)
GPIO_pin = 18
GPIO.setup(GPIO_pin, GPIO.OUT)

# Initialize PiCamera
picam2 = Picamera2()
# Camera setup code here...

# Initialize YOLO model
model = YOLO('best (1).pt')

record = False
video_writer = None
frame_size = (800, 600)  # Frame size from your camera setup

while True:
    # Capture frame from PiCamera
    im = picam2.capture_array()

    # Perform object detection with YOLO
    results = model(im, conf=0.4)

    # Check for your specific class detection
    if 'person' in results.names:  # Replace 'person' with your interested class
        GPIO.output(GPIO_pin, GPIO.HIGH)  # Set GPIO pin high
        # Start recording if not already
        if not record:
            video_writer = cv2.VideoWriter('detected_event.avi', cv2.VideoWriter_fourcc(*'XVID'), 10, frame_size)
            record = True
            start_time = cv2.getTickCount()

    # Record video if activated
    if record:
        video_writer.write(im)
        if (cv2.getTickCount() - start_time) / cv2.getTickFrequency() > 10:  # Record for 10 seconds
            GPIO.output(GPIO_pin, GPIO.LOW)  # Reset GPIO pin
            video_writer.release()  # Stop recording
            record = False

    # Display image & exit condition here...

# Cleanup
GPIO.cleanup()
*Notes:*
- Replace 'person' with the class ID/name you're interested in.
- Adjust frame_size based on your camera's configuration.
- This example assumes continuous detection and recording for 10
seconds once the specified class is detected.
This should get you started on enhancing your security project! If you
need further assistance or have questions, feel free to ask. Good luck with
your project! 🚀
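As a side note, a minimal sketch of one way to check whether a particular class is actually among the current frame's detections, using the ultralytics Results API (the class name here is illustrative):

# Check the detected boxes for a target class by name
results = model(im, conf=0.4)
target = 'person'  # class of interest
class_ids = results[0].boxes.cls.tolist()
if any(model.names[int(c)] == target for c in class_ids):
    GPIO.output(GPIO_pin, GPIO.HIGH)  # e.g. raise the pin only when the target class is present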
-
Thanks a lot. I tried out that project of sending via my email account, but Google had updated its third-party policy and banned non-secure sources from sending emails. I don't recall the actual statement, but it was similar to that. I was trying out the one that sends the video to Telegram instead but got tangled in the code 😀😁. I believe assistance here would really be helpful.
This is currently my code:
import cv2
from picamera2 import Picamera2
from ultralytics import YOLO
import time
import os
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM) # Use BCM GPIO numbering
GPIO_pin = 25 # Replace with your GPIO pin number
GPIO.setup(GPIO_pin, GPIO.OUT)
# Initialize PiCamera
picam2 = Picamera2()
picam2.preview_configuration.main.size = (800, 600)
picam2.preview_configuration.main.format = "RGB888"
picam2.preview_configuration.align()
picam2.configure("preview")
picam2.start()
# Initialize YOLO model
model = YOLO('best (1).pt')
frame_size = (800, 600) # Frame size from your camera setup
detection_count = 0
first_detection_time = 0
second_detection_time = 0
while True:
    record = False
    video_writer = None

    # Capture frame from PiCamera
    im = picam2.capture_array()

    # Perform object detection with YOLO
    results = model(im, conf=0.3)

    # After running detection
    for result in results:
        for box in result.boxes:
            # Assuming 'person' class index is 0, adjust based on your model
            if box.cls == 0:
                # Check for your specific class detection
                GPIO.output(GPIO_pin, GPIO.HIGH)  # Turn on the buzzer
                print("Person detected!")

                # Increment detection count and record the time of detections
                if detection_count == 0:
                    first_detection_time = time.time()
                elif detection_count == 1:
                    second_detection_time = time.time()
                detection_count += 1

                # Start recording if two detections within 5 seconds
                if detection_count == 2 and (second_detection_time - first_detection_time) <= 5:
                    record = True

    # Record video if activated
    if record:
        video_name = f'detected_event_{int(time.time())}.avi'
        video_writer = cv2.VideoWriter(video_name, cv2.VideoWriter_fourcc(*'XVID'), 30, frame_size)  # Set frame rate to 30 fps
        start_time = cv2.getTickCount()
        GPIO.output(GPIO_pin, GPIO.HIGH)  # Turn on the buzzer
        while (cv2.getTickCount() - start_time) / cv2.getTickFrequency() < 5:  # Record for 5 seconds
            im = picam2.capture_array()
            video_writer.write(im)
        GPIO.output(GPIO_pin, GPIO.LOW)  # Reset GPIO pin
        video_writer.release()  # Stop recording
        print(f"Video {video_name} saved!")
        detection_count = 0  # Reset detection count

    # Draw bounding boxes on the image
    # results.show()

    # Display the image with bounding boxes
    cv2.imshow("YOLO Object Detection", im)

    # Exit if 'q' is pressed
    if cv2.waitKey(1) == ord('q'):
        break

# Clean up
cv2.destroyAllWindows()
But first, I am still trying to integrate the ultrasonic sensor so that it can display the distance of the detected object from the sensor together with the video from the camera. Any assistance offered will be really helpful. Thanks again.
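A minimal sketch of one way to do this, assuming an HC-SR04-style sensor wired to illustrative TRIG/ECHO pins, with the reading drawn onto each frame with OpenCV before it is shown or written to the video:

TRIG_PIN, ECHO_PIN = 23, 24  # illustrative BCM pin numbers
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def read_distance_cm():
    # Send a 10 microsecond trigger pulse
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, False)
    # Time the echo pulse and convert to centimetres (speed of sound ~343 m/s)
    start, stop = time.time(), time.time()
    while GPIO.input(ECHO_PIN) == 0:
        start = time.time()
    while GPIO.input(ECHO_PIN) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2

# Inside the capture loop, overlay the distance on the frame before writing/showing it
distance = read_distance_cm()
cv2.putText(im, f"Distance: {distance:.1f} cm", (10, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)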
…On Wed, 17 Apr 2024, 20:03 Glenn Jocher, ***@***.***> wrote:
Hey there!
I'm delighted to hear you're giving the provided advice a go! 🌟 If you
run into any hitches or if there's anything else you're curious about,
don't hesitate to reach out.
Just so you know, there's also a comprehensive guide on creating a
Security Alarm System using Ultralytics YOLOv8 that might interest you. It
dives deeper into object detection for security systems, complete with a
code example on how to send email alerts upon detection. Check it out on
our docs page for more nifty details and tips! 🚀
Happy coding, and best of luck with your Raspberry Pi security project!
-
Hello there! Can you please share a code snippet for taking a snapshot of the frame when a person is detected and attaching it to the email?
-
Hi! I have used both the object detection and object tracking modules and I have a few questions.
-
Hi there. Thanks for the help rendered to me earlier on. It really helped me a lot.
I am still working on my code for the security system, but I experienced an error while drawing the bounding boxes on the video, so I commented that part out for now. Below is my code:
import cv2
from picamera2 import Picamera2
from ultralytics import YOLO
import time
import os
import RPi.GPIO as GPIO
import requests
import threading
import Adafruit_ADS1x15
import json
# Telegram Bot API token
telegram_token = "TELEGRAM TOKEN"
telegram_chat_id = "CHAT ID"
telegram_url_base = f"https://api.telegram.org/bot{telegram_token}/"
# Initialize the ADC using I2C
bus_number = 1
adc = Adafruit_ADS1x15.ADS1015(busnum=bus_number)
GAIN = 1
# GPIO pin setup
GPIO.setmode(GPIO.BOARD)
GPIO.setwarnings(False)
echo_pin1, trig_pin1 = 36, 37 # North Wing sensor pins
echo_pin2, trig_pin2 = 38, 40 # East Wing sensor pins
buzzer1_pin, buzzer2_pin = 29, 31 # Buzzer pins
# Setup GPIO pins for output/input
GPIO.setup([buzzer1_pin, buzzer2_pin, trig_pin1, trig_pin2], GPIO.OUT)
GPIO.setup([echo_pin1, echo_pin2], GPIO.IN)
# Initialize PiCamera
picam2 = Picamera2()
picam2.preview_configuration.main.size = (800, 600)
picam2.preview_configuration.main.format = "RGB888"
picam2.preview_configuration.align()
picam2.configure("preview")
picam2.start()
# Initialize YOLO model
model = YOLO('best (1).pt')
frame_size = (800, 600) # Frame size from your camera setup
detection_count = 0
first_detection_time = 0
second_detection_time = 0
last_video_message_id = None
def ultrasonic_distance(trig_pin, echo_pin):
    """Calculate distance using ultrasonic sensor."""
    GPIO.output(trig_pin, True)
    time.sleep(0.00001)
    GPIO.output(trig_pin, False)
    start_time, stop_time = time.time(), time.time()
    while GPIO.input(echo_pin) == 0:
        start_time = time.time()
    while GPIO.input(echo_pin) == 1:
        stop_time = time.time()
    elapsed_time = stop_time - start_time
    distance = (elapsed_time * 34300) / 2
    return distance

def send_telegram_message(message):
    url = f"https://api.telegram.org/bot{telegram_token}/sendMessage"
    params = {"chat_id": telegram_chat_id, "text": message}
    requests.post(url, params=params)

def delete_telegram_message(message_id):
    url = f"https://api.telegram.org/bot{telegram_token}/deleteMessage"
    params = {"chat_id": telegram_chat_id, "message_id": message_id}
    requests.post(url, params=params)

def send_video_to_telegram(video_path):
    global last_video_message_id
    if last_video_message_id:
        time.sleep(15)
        delete_telegram_message(last_video_message_id)
    url = f"https://api.telegram.org/bot{telegram_token}/sendVideo"
    files = {'video': open(video_path, 'rb')}
    data = {'chat_id': telegram_chat_id}
    response = requests.post(url, files=files, data=data).json()
    last_video_message_id = response.get('result', {}).get('message_id')

def record_video():
    video_name = f'detected_event_{int(time.time())}.avi'
    video_path = os.path.join(os.getcwd(), video_name)
    video_writer = cv2.VideoWriter(video_path, cv2.VideoWriter_fourcc(*'XVID'), 30, frame_size)  # Set frame rate to 30 fps
    start_time = cv2.getTickCount()
    while (cv2.getTickCount() - start_time) / cv2.getTickFrequency() < 5:  # Record for 5 seconds
        im = picam2.capture_array()
        video_writer.write(im)
    video_writer.release()  # Stop recording
    send_telegram_message("Video of intrusion saved. Please check.")
    send_video_to_telegram(video_path)
def handle_buzzers(distance1, distance2):
    """Activate buzzers based on distance readings."""
    if distance1 < distance2:
        intrusion_side = "North Wing"
    else:
        intrusion_side = "East Wing"
    if distance1 <= 7 or distance2 <= 7:
        GPIO.output(buzzer1_pin, GPIO.HIGH)
        GPIO.output(buzzer2_pin, GPIO.HIGH)
        print(f"Intrusion detected on both sides, very close in the {intrusion_side}.")
        time.sleep(20)
        GPIO.output(buzzer1_pin, GPIO.LOW)
        GPIO.output(buzzer2_pin, GPIO.LOW)
    elif distance1 <= 15 or distance2 <= 15:
        GPIO.output(buzzer1_pin, GPIO.HIGH)
        print(f"Intrusion detected in the {intrusion_side}.")
        time.sleep(20)
        GPIO.output(buzzer1_pin, GPIO.LOW)

def turn_off_buzzer(buzzer_pin):
    GPIO.output(buzzer_pin, GPIO.LOW)
    send_telegram_message(f"Buzzer connected to pin {buzzer_pin} has been turned off.")

def check_telegram_commands():
    url = telegram_url_base + "getUpdates"
    response = requests.get(url)
    data = response.json()  # Parse JSON response
    # Check if the "result" key exists in the response
    if "result" in data:
        messages = data["result"]
        if messages:
            for message in messages:
                try:
                    text = message["message"]["text"]
                    if text == "off1":
                        turn_off_buzzer(buzzer1_pin)
                    elif text == "off2":
                        turn_off_buzzer(buzzer2_pin)
                    # Update offset to only receive new messages later
                    last_update_id = message["update_id"] + 1
                    requests.get(url, params={"offset": last_update_id})
                except KeyError:
                    continue
while True:
    check_thread = threading.Thread(target=check_telegram_commands)  # Regularly check for new commands
    check_thread.start()
    record = False
    resistance = adc.read_adc(0, gain=GAIN)
    if resistance > 750:
        distance1 = ultrasonic_distance(trig_pin1, echo_pin1)
        distance2 = ultrasonic_distance(trig_pin2, echo_pin2)
        print(f"North Wing Distance: {distance1:.2f} cm, East Wing Distance: {distance2:.2f} cm")

        # Capture frame from PiCamera
        im = picam2.capture_array()

        # Perform object detection with YOLO
        results = model(im, conf=0.4)

        # After running detection
        for result in results:
            num_people_detected = 0
            for box in result.boxes:
                # Assuming 'person' class index is 0, adjust based on your model
                if box.cls == 0:
                    threading.Thread(target=handle_buzzers, args=(distance1, distance2)).start()
                    num_people_detected += 1
                    print("Person detected within range!")
            if num_people_detected > 0:
                people_message = f"{num_people_detected} {'person' if num_people_detected == 1 else 'people'} detected within {distance1} cm."
                send_telegram_message(people_message + f" Distance left to reach the house: {distance1} cm.")

                # Start recording if two detections within 5 seconds
                if detection_count == 0:
                    first_detection_time = time.time()
                elif detection_count == 1:
                    second_detection_time = time.time()
                detection_count += num_people_detected
                if detection_count >= 1 and (second_detection_time - first_detection_time) <= 5:
                    record = True

        # Record video if activated
        if record:
            threading.Thread(target=record_video).start()
            detection_count = 0  # Reset detection count

        # Draw bounding boxes on the image
        # results.show()

    # Display the image with bounding boxes
    cv2.imshow("YOLO Object Detection", im)

    # Exit if 'q' is pressed
    if cv2.waitKey(1) == ord('q'):
        break

# Clean up
GPIO.cleanup()
cv2.destroyAllWindows()
The code works like this: it gets readings from the LDR; if the reading is below the threshold, it activates the camera to carry out object detection; if the target class is detected, it activates the buzzers according to each buzzer's activation distance, and it then sends the recorded video plus the distance left to reach the house from the different ultrasonic readings.
I would like some help so that the distance message is labelled, for example "Intruder found in the North Wing, distance left to reach the house: ...", and likewise for the East Wing, sent to the specified Telegram bot, but the distance should only be sent if it is less than 15 cm. The sent video should then be deleted after 15 seconds by replacing it with the newly recorded video. I would also like help so that the system is always listening for incoming commands from the Telegram bot to turn off the specified buzzers.
I also wanted to ask for the best way of applying threading to the code, because the previous approaches I tried ended up sending a video with an unrecognised format.
Finally, I wanted to enable live streaming to the Telegram bot but failed. Any help rendered to me will be priceless.
Thank you
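On the "always listening for commands" part, a minimal sketch of one approach, assuming the same telegram_token, telegram_url_base, buzzer pins and turn_off_buzzer helper defined above: run a single long-polling loop in a daemon thread and keep the getUpdates offset so each command is only handled once.

def telegram_listener():
    # Long-poll getUpdates in the background and dispatch "off1"/"off2" commands
    offset = None
    while True:
        params = {"timeout": 30}
        if offset is not None:
            params["offset"] = offset
        resp = requests.get(telegram_url_base + "getUpdates", params=params).json()
        for update in resp.get("result", []):
            offset = update["update_id"] + 1
            text = update.get("message", {}).get("text", "")
            if text == "off1":
                turn_off_buzzer(buzzer1_pin)
            elif text == "off2":
                turn_off_buzzer(buzzer2_pin)

# Start the listener once, before the main detection loop
threading.Thread(target=telegram_listener, daemon=True).start()

Starting the listener once as a daemon thread avoids spawning a new thread on every pass through the main loop, and keeping the offset means old commands are not re-applied.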
-
Thanks a lot. It really worked out well, but I happened to face a challenge: I tried to integrate the bounding boxes onto the video that was being sent, but then it seemed as if the video being sent was just an image of 0 seconds. Any help rendered would be appreciated.
Secondly, I am inquiring how I can make this program run on startup. I am using VS Code to run it, but when I try running the program using crontab, it fails to find the model, yet both are in the same folder. Thank you
…On Tue, 14 May 2024, 00:12 Paula Derrenger, ***@***.***> wrote:
Hello! I'm glad the previous assistance was helpful. For drawing bounding
boxes on the video using your system, you seem to have most components set
up correctly. However, to help you further with your requirements, here are
some pointers:
1.
*Bounding Boxes Drawing:*
Since the results.show() call is commented out, ensure that the frames you get from the picam2.capture_array() call are in the correct format. Then you can overlay the bounding boxes using this snippet right before your cv2.imshow():
for result in results:
    if result.boxes:
        annotated_image = result.plot()  # Adds boxes to the image
        cv2.imshow("YOLO Object Detection", annotated_image)
2.
*Distance Labeling and Telegram Messaging:*
Modify the send_telegram_message function to include the distance
based on your needs. It sounds like you want to append distance information
per location:
def send_telegram_message_with_distance(message, distance, location):
    final_message = f"{message} Intruder found in the {location}, distance left to reach the house: {distance} cm."
    params = {"chat_id": telegram_chat_id, "text": final_message}
    requests.post(telegram_url_base + "sendMessage", params=params)
Use this function and pass the respective distance and location when calling it within your detection conditions (a usage sketch follows at the end of this reply).
3.
*Threading for Real-time Commands:*
For managing threading and ensuring video format integrity, make sure
to set the correct codec and format when initializing cv2.VideoWriter.
fourcc = cv2.VideoWriter_fourcc(*'XVID')
video_writer = cv2.VideoWriter(video_path, fourcc, 20.0, (640, 480))
4.
*Live Streaming to Telegram:*
Live streaming directly to Telegram is complex because Telegram
doesn't support real-time video streaming. Instead, consider using a
different platform that supports streaming, like YouTube Live, and then
share the link through Telegram.
For threaded operations, ensure each thread handles a distinct task to
prevent resource clashes, and always synchronize shared resources
appropriately.
If you encounter further issues or specific errors, do share the error
messages or symptoms to better address the problems!
Hope this helps! 🚀
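Building on point 2 above, a hedged usage sketch inside the detection branch, assuming distance1 / distance2 are the North Wing and East Wing readings and the 15 cm rule from the earlier description:

# Only send the labelled distance when the intruder is within 15 cm (illustrative rule)
if num_people_detected > 0:
    if distance1 <= 15:
        send_telegram_message_with_distance(people_message, round(distance1, 1), "North Wing")
    if distance2 <= 15:
        send_telegram_message_with_distance(people_message, round(distance2, 1), "East Wing")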
-
Thank you, but I've failed to integrate them. This is my code:
def record_video():
    video_name = f'detected_event_{int(time.time())}.avi'
    video_path = os.path.join(os.getcwd(), video_name)

    # Initialize video writer
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    video_writer = cv2.VideoWriter(video_path, fourcc, 20.0, frame_size)

    start_time = time.time()
    while (time.time() - start_time) < 5:  # Record for 5 seconds
        im = picam2.capture_array()

        # Perform object detection and annotate frame
        results = model(im, conf=0.4)
        annotated_image = im.copy()
        for result in results:
            for box in result.boxes:
                annotated_image = result.plot(annotated_image)  # Adds boxes to the image

        video_writer.write(annotated_image)  # Save frame to video
        cv2.imshow("YOLO Object Detection", annotated_image)
        cv2.waitKey(1)  # Display the frame

    video_writer.release()  # Stop recording
    cv2.destroyAllWindows()
    send_telegram_message("Video of intrusion saved. Please check.")
    send_video_to_telegram(video_path)
…On Tue, May 21, 2024 at 5:22 PM Glenn Jocher ***@***.***> wrote:
Hello! It sounds like you're making great progress with your project.
Let's address your concerns:
1.
*Video as an Image Issue*: It seems the video might not be encoded
correctly when being saved. Ensure you're using cv2.VideoWriter to
save the video and set the correct codec and frame size. Here’s a quick
example:
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
while True:
    ret, frame = cap.read()
    if ret:
        # your detection code
        out.write(frame)  # save frame to video
    else:
        break
out.release()
2.
*Auto-Start on System Boot*: For running scripts at startup with
crontab, ensure your script uses absolute paths for the model and any
other files it accesses. Also, specify the full path to the Python
interpreter. Here’s an example crontab entry:
@reboot /usr/bin/python3 /home/user/my_script.py
Make sure your Python script and all related files (like the model file)
are accessible from the path where the script runs, and that all necessary
environment variables are set correctly in the script or sourced in the
crontab.
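As a concrete illustration of the absolute-path advice (the paths and filenames below are placeholders), changing into the project directory in the crontab entry also lets a relative model path such as 'best (1).pt' resolve, and redirecting output to a log file makes startup failures visible:

@reboot cd /home/user/project && /usr/bin/python3 security_system.py >> /home/user/project/startup.log 2>&1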
If you need more detailed guidance, feel free to check out our Security
Alarm System guide
<https://docs.ultralytics.com/guides/security-alarm-system/> for more
insights on setting up projects with YOLOv8. Keep up the great work! 🚀
-
Thanks for the help, but I experienced this error:
Exception in thread Thread-15 (record_video):
Traceback (most recent call last):
  File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/Abdul/project/annotated video.py", line 125, in record_video
    annotated_image = results.render()[0]  # This will render the detections directly onto the image
                      ^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'render'
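For reference, the ultralytics Results objects returned by model() expose plot() rather than render(); a minimal sketch of the annotation step that sidesteps this AttributeError (variable names follow the code above):

# model(im) returns a list of Results; plot() returns the frame with boxes drawn on it
results = model(im, conf=0.4)
annotated_image = results[0].plot()
video_writer.write(annotated_image)  # save the annotated frame to the video
cv2.imshow("YOLO Object Detection", annotated_image)
cv2.waitKey(1)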
…On Wed, May 22, 2024 at 1:23 AM Paula Derrenger ***@***.***> wrote:
@mimnsam <https://github.com/mimnsam> hello! It looks like you're almost
there with integrating the video recording and object detection. I noticed
a small issue in your code where you're trying to plot bounding boxes. You
should update the plot_bboxes method to handle the results correctly.
Here's a corrected snippet for your loop:
# Perform object detection and annotate frame
results = model(im, conf=0.4)
annotated_image = results.render()[0]  # This will render the detections directly onto the image
video_writer.write(annotated_image)  # Save frame to video
cv2.imshow("YOLO Object Detection", annotated_image)
cv2.waitKey(1)  # Display the frame
Make sure that model is properly loaded with the YOLO model and
configured before this loop. This adjustment should help you integrate the
detection results directly into your video stream correctly. If you need
further assistance, feel free to ask! 🚀
-
Hello,

import numpy as np
import time
# from time import time
import supervision as sv
import smtplib
from email_settings import password, from_email, to_email

# create server
server = smtplib.SMTP('smtp.gmail.com: 587')
# login credentials for sending the mail
server.login(from_email, password)

def send_email(to_email, from_email, people_detected=1):
    ...

def record_video(cap, video_writer, duration=15):
    ...

class ObjectDetection:
    ...

detector = ObjectDetection(capture_index=0)
-
I would like to do detection of unattended baggage in a public area, e.g. on a train platform, and upon detection send an email alert to the control center. My problem is that unattended baggage could be any object class. The question is: how do we decide whether an item is considered "unattended"? Based on idle time in a region?
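One common approach, sketched below with hedged assumptions (a COCO-pretrained model, COCO class IDs for bags, a fixed pixel threshold for "hasn't moved"), is to track each bag with ultralytics tracking IDs and flag it once it has stayed in roughly the same spot for longer than some idle time; a person-proximity check could be layered on top:

import time
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # assumption: a COCO-pretrained model
BAG_CLASSES = {24, 26, 28}     # backpack, handbag, suitcase in COCO
IDLE_SECONDS = 60              # how long a bag may sit still before alerting
last_state = {}                # track_id -> (x, y, time it was last seen moving)

for result in model.track(source=0, stream=True, persist=True):
    if result.boxes.id is None:
        continue
    now = time.time()
    for box, track_id in zip(result.boxes, result.boxes.id.int().tolist()):
        if int(box.cls) not in BAG_CLASSES:
            continue
        x, y, w, h = box.xywh[0].tolist()
        if track_id not in last_state:
            last_state[track_id] = (x, y, now)
        else:
            x0, y0, t0 = last_state[track_id]
            if abs(x - x0) > 20 or abs(y - y0) > 20:   # the bag moved, reset its timer
                last_state[track_id] = (x, y, now)
            elif now - t0 > IDLE_SECONDS:              # stationary too long: treat as unattended
                print(f"Unattended bag (track {track_id}) idle for {now - t0:.0f}s -> send email alert")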
-
Hi! How do I modify the program to display the detected category names in my email?
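A minimal sketch of one way to collect the detected class names from an ultralytics result and put them in the message body; the email-sending part is assumed to be the guide's existing function:

# Gather the class names present in this frame's detections
names = [model.names[int(c)] for c in results[0].boxes.cls]
detected_labels = ", ".join(sorted(set(names)))

message_body = f"ALERT - {len(names)} object(s) detected: {detected_labels}"
# then pass message_body to your email-sending function instead of a fixed string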
-
Hey there, YOLO staff! 1. When I run YOLO models for inference and prediction, the model detects all the objects it sees through my laptop camera, even though the default class in the Security Alarm System is 0, which is person. How can I make the model predict only persons, not all the objects it was trained on? 2. The model uses my laptop camera instead of my webcam; how can I change that? Who can help me, please? Sincerely yours!
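A short sketch covering both points; the class filter uses the predictor's classes argument, and the camera index is an assumption (index 1 is typically the first external webcam, but it varies by machine):

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

cap = cv2.VideoCapture(1)  # 0 is usually the built-in camera, 1 an external webcam
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # classes=[0] restricts predictions to the 'person' class only
    results = model(frame, classes=[0], conf=0.5)
    cv2.imshow("Person detection", results[0].plot())
    if cv2.waitKey(1) == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()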
-
To the respectful staff of Ultralytics: I have two questions. 1. Can I use multiple external cameras on one laptop or PC to detect objects using the code below: detector = ObjectDetection(capture_index=1) 2. How can I let YOLOv9 detect more than one class, for example person, cat, and vehicle, using the code below? Any help from the staff or others would be appreciated!
-
Hello, respectful staff! How can I find the capture_index number of a camera, or of multiple cameras, on a Windows laptop or PC? Any help, either from the respectful staff or others, would be much appreciated!
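A hedged sketch of a common way to probe which indices are live: try opening each index with OpenCV and keep the ones that actually deliver a frame (on Windows the CAP_DSHOW backend usually enumerates faster):

import cv2

def list_camera_indices(max_index=5):
    """Return the camera indices that can be opened and actually deliver a frame."""
    available = []
    for i in range(max_index):
        cap = cv2.VideoCapture(i, cv2.CAP_DSHOW)  # CAP_DSHOW is Windows-specific
        if cap.isOpened():
            ret, _ = cap.read()
            if ret:
                available.append(i)
        cap.release()
    return available

print(list_camera_indices())  # e.g. [0, 1] -> built-in camera plus one webcam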
-
Hello there, respectful staff! detector1 = ObjectDetection(capture_index=0) — I have my laptop camera and my webcam on my laptop. Any help would be much appreciated!!
-
Hey, respectful staff! Does the YOLO security alarm webcam system consume more laptop/PC RAM over time? Does it exhaust the laptop or PC's CPU over time? Any help, either from the respectful staff or others, would be much appreciated!!
-
Hey! How do I integrate a digital video recorder (surveillance camera) with my laptop while using YOLOv8, instead of a webcam? Best regards!
-
Hello there, glenn-jocher! How do I integrate a digital video recorder (surveillance camera) with my laptop while using YOLOv8, instead of a webcam? Any help would be appreciated!
-
Hey there, respectful staff! Do all Anaconda pre-installed and installed libraries need manual updates, or are they updated automatically? Any help would be appreciated! Sincerely yours!
-
Hello there, respectful staff! Using the YOLO security alarm system, how can I receive an IMAGE of the detected person in my Gmail? Any help would be much appreciated! Sincerely yours!
-
Hey! How can I modify the email function of the Security Alarm System code below so that it attaches the IMAGE of the detected person/object via Gmail? Would you please give an example?
from email.mime.multipart import MIMEMultipart
def send_email(to_email, from_email, object_detected=1):
    ...
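A hedged sketch of one way to do it with the standard library: encode the annotated frame with OpenCV, wrap it in a MIMEImage, and attach it to the multipart message (server, from_email, to_email and the frame variable are assumed to exist as in the guide's code):

import cv2
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

def send_email_with_image(to_email, from_email, frame, object_detected=1):
    message = MIMEMultipart()
    message["From"] = from_email
    message["To"] = to_email
    message["Subject"] = "Security Alert"
    message.attach(MIMEText(f"ALERT - {object_detected} object(s) detected!", "plain"))

    # Encode the current frame as JPEG and attach it
    ok, buffer = cv2.imencode(".jpg", frame)
    if ok:
        message.attach(MIMEImage(buffer.tobytes(), name="detection.jpg"))

    server.sendmail(from_email, to_email, message.as_string())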
-
Hey there, respectful staff! When I tried to send an IMAGE of the detected person via the Security Alarm System to my Gmail using the code below:
from email.mime.multipart import MIMEMultipart
def send_email(to_email, from_email, image_path, object_detected=1):
    ...
an error message was raised. What should I do? Best regards!
-
Hey! Suppose you have the YOLOv11 Security Alarm System on your laptop and your webcam detects a stranger or thief inside your house. Can the system send an image of the detected thief or stranger along with the text message to your Gmail, or is it only able to send a text message? If the Security Alarm System can send an image of the detected stranger or thief with the message, please let me know how (with an example). Below is the YOLOv11 Security Alarm System code:
from email.mime.multipart import MIMEMultipart
def send_email(to_email, from_email, object_detected=1):
    ...
Any help would be much appreciated!!
-
Hello there! detector = ObjectDetection(capture_index=0) — if yes, how? Please give an example!
-
Hey! Is there a code or way to detect a compatible capture index or URL of a DVR/NVR camera stream ? Would you please give an example ? |
Beta Was this translation helpful? Give feedback.
-
Hey! Yesterday my friend disclosed his surveillance camera info to me so that my YOLOv11 Security Alarm System can detect it. He showed me the two pieces of info below:
His camera IP address: 192xxxxxxxxxx
His camera code: m53xxxxxxxxx
But neither of the above is detected by the function below:
detector = ObjectDetection(capture_index=0)
So would you please show us an example of the normal format of a DVR/NVR camera stream index, to help us find the correct index for any DVR/NVR? Best regards!!
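For what it's worth, networked DVR/NVR cameras are usually reached not by an integer capture_index but by an RTSP URL; the exact path depends on the vendor, so everything in the URL below is a placeholder to adapt:

import cv2

# Typical RTSP URL shape: rtsp://<user>:<password>@<camera-ip>:554/<vendor-specific-path>
stream_url = "rtsp://admin:password@192.168.1.64:554/stream1"  # placeholder values

cap = cv2.VideoCapture(stream_url)  # OpenCV accepts a URL string instead of an index
if not cap.isOpened():
    print("Could not open the RTSP stream - check credentials, IP, port and path")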
-
guides/security-alarm-system/
Security Alarm System Project Using Ultralytics YOLOv8. Learn how to implement a Security Alarm System using Ultralytics YOLOv8.
https://docs.ultralytics.com/guides/security-alarm-system/