A Specific Marker Approach to Improve Object Recognition in Bullet Launchers with Computer Vision
Abstract
The accuracy of object recognition depends on the capability of the computer-vision system. This study tested a camera's ability to recognize both passive markers and active LED markers. A specific active marker was analyzed by blinking the LED, and the accuracy of the measured duty cycle is one of the factors considered when selecting a specific marker. The proposed system was validated by implementing an integrated control system and the hardware needed to develop a specific marker. The results show that a commercial camera can recognize all of the colors used as test markers. Here, the specific marker improved the bullet launcher system's ability to track, identify, detect, mark, lock, and shoot a target precisely. In image processing, a comparison of processing times showed that the higher the pixel resolution, the longer the processing time. When the object moves at a certain speed, the camera can detect several marker shapes, such as circles, squares, and triangles; the circular marker gives the highest accuracy at every speed level. In the duty-cycle variation test, the best accuracy was obtained with the red LED at a duty cycle of 50%, reaching 96%. The LED tests also showed that ambient light affects the color-detection results. Moreover, using the LEDs with the highest accuracy at the implementation stage would be advantageous.
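The abstract describes selecting a specific blinking marker by the accuracy of its duty cycle. A minimal sketch of how such a duty cycle might be estimated from per-frame on/off detections and matched against a configured target follows; all function names, the 50% target, and the tolerance are illustrative assumptions, not the authors' implementation:

```python
def estimate_duty_cycle(detections):
    """Fraction of frames in which the LED is detected as 'on'."""
    if not detections:
        return 0.0
    return sum(1 for d in detections if d) / len(detections)

def matches_marker(detections, target_duty=0.5, tolerance=0.05):
    """Accept the candidate as the specific marker if its measured
    duty cycle is within `tolerance` of the configured target."""
    return abs(estimate_duty_cycle(detections) - target_duty) <= tolerance

# Example: over a 20-frame window, the LED is detected in 10 frames,
# giving a measured duty cycle of 0.5 (a match for a 50% target).
frames = [True] * 10 + [False] * 10
```

In practice the per-frame detections would come from a color-segmentation step on the camera feed, and the window length would be chosen relative to the LED's blink period so that the measured fraction converges to the true duty cycle.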
Copyright (c) 2024 Umar Ali Ahmad, Wildan Panji Tresna, Iyon Titok Sugiarto, Mera Kartika Delimayanti, Fahmi Charish Mustofa, Mohammad Reza Faisal, Reza Rendian Septiawan

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license that allows others to share the work with an acknowledgement of the work's authorship and its initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).