Enhancing Public Safety Situational Awareness Using Edge Intelligence

Authors

  • Pedro Lira, Federal University of Rio Grande do Norte
  • Stefano Loss, Federal University of Rio Grande do Norte
  • Karine Costa, Federal University of Rio Grande do Norte
  • Daniel Araújo, Federal University of Rio Grande do Norte
  • Aluizio Rocha Neto, Federal University of Rio Grande do Norte
  • Nelio Cacho, Federal University of Rio Grande do Norte
  • Thais Batista, Federal University of Rio Grande do Norte
  • Everton Cavalcante, Federal University of Rio Grande do Norte
  • Frederico Lopes, Federal University of Rio Grande do Norte
  • Eduardo Nogueira, Federal University of Rio Grande do Norte

DOI:

https://doi.org/10.64552/wipiec.v11i1.88

Keywords:

public safety, situational awareness, edge intelligence, stream analytics

Abstract

Real-time video analytics powered by artificial intelligence (AI) enables public safety agents to perceive and respond effectively to dynamic environments. However, processing large-scale video streams introduces computational and latency challenges. This work presents a framework that combines edge and cloud computing for efficient AI-based processing of video streams in public safety applications. We evaluated the framework’s performance on a face recognition task, comparing edge and cloud processing. Our initial results show that edge processing achieves lower total latency than cloud processing despite higher inference times, primarily because of reduced transmission overhead. The framework also achieves high accuracy in recognition tasks, albeit with trade-offs in recall.
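The latency result above follows from a simple decomposition: total latency is transmission time plus network round trip plus inference time, and edge processing avoids uploading full frames. The Python sketch below illustrates only this decomposition; it is not the authors' framework, and every number in it (frame size, uplink bandwidth, round-trip times, inference times) is a hypothetical assumption chosen for illustration.

# Back-of-the-envelope latency model for edge vs. cloud video analytics.
# All figures are hypothetical assumptions, not measurements from the paper.

def end_to_end_latency(payload_bytes: float, bandwidth_bps: float,
                       network_rtt_s: float, inference_s: float) -> float:
    """Total latency = payload transmission + network round trip + model inference."""
    transmission_s = payload_bytes * 8 / bandwidth_bps
    return transmission_s + network_rtt_s + inference_s

# Assumed scenario: a 200 kB JPEG frame and a 20 Mbit/s uplink to the cloud.
FRAME_BYTES = 200_000

# Edge: the frame never leaves the camera node; only a small result message
# (~1 kB) is sent, but the constrained device runs inference more slowly.
edge = end_to_end_latency(payload_bytes=1_000, bandwidth_bps=20e6,
                          network_rtt_s=0.005, inference_s=0.120)

# Cloud: the whole frame is uploaded over the WAN, but a GPU server infers faster.
cloud = end_to_end_latency(payload_bytes=FRAME_BYTES, bandwidth_bps=20e6,
                           network_rtt_s=0.060, inference_s=0.030)

print(f"edge : {edge * 1000:.1f} ms")   # lower total latency despite slower inference
print(f"cloud: {cloud * 1000:.1f} ms")  # frame transmission overhead dominates

With these assumed figures, the edge path totals roughly 125 ms against roughly 170 ms for the cloud path, mirroring the qualitative trend reported in the abstract: the slower on-device model is more than offset by not shipping the frame over the network.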

Published

2025-09-02

How to Cite

Lira, P., Loss, S., Costa, K., Araújo, D., Rocha Neto, A., Cacho, N., Batista, T., Cavalcante, E., Lopes, F., & Nogueira, E. (2025). Enhancing Public Safety Situational Awareness Using Edge Intelligence. WiPiEC Journal - Works in Progress in Embedded Computing Journal, 11(1), 4. https://doi.org/10.64552/wipiec.v11i1.88