MTCNN์„ ์‚ฌ์šฉํ•˜๋ฉด ์ด๋ฏธ์ง€๋‚˜ ์˜์ƒ๋ฐ์ดํ„ฐ์˜ ์–ผ๊ตด ์ธ์‹์„ ๊ฐ„๋‹จํ•œ ์ฝ”๋“œ๋กœ ๊ตฌํ˜„ํ•  ์ˆ˜ ์žˆ๋‹ค.

์ด์ „ ๊ฒŒ์‹œ๋ฌผ์—์„œ๋Š” ๋‚˜์ด/์„ฑ๋ณ„ ์˜ˆ์ธก์„ ์œ„ํ•ด ํ›ˆ๋ จ๋œ CAFFE ๋ชจ๋ธ๊ณผ Cascade ๋“ฑ์„ ์‚ดํŽด๋ณด์•˜์ง€๋งŒ, 

๋‹จ์ˆœํžˆ ์–ผ๊ตด์ธ์‹๋งŒ ์œ„ํ•ด์„œ๋Š” MTCNN์ด ๋ณด๋‹ค ์‚ฌ์šฉํ•˜๊ธฐ ๊ฐ„ํŽธํ•˜๊ณ , ์–ผ๊ตด์ธ์‹ ์„ฑ๋Šฅ๋„ ๋›ฐ์–ด๋‚˜๋‹ค.

์ด๋ฒˆ ์ฝ”๋“œ ๊ตฌํ˜„์—์„œ๋Š” ์˜์ƒ๋ฐ์ดํ„ฐ์—์„œ ์—ฌ๋Ÿฌ ์–ผ๊ตด์„ ์ธ์‹ํ•˜๋Š” ๊ฒŒ ์ฃผ์š”๋ชฉ์ ์ด์—ˆ๊ธฐ ๋•Œ๋ฌธ์— ๊ธฐ์กด ์ฝ”๋“œ์— ๋ฐ˜๋ณต๋ฌธ์„ ์ถ”๊ฐ€ํ•ด๋ณด์•˜๋‹ค.

 

1. Installing and importing MTCNN

# ์ฝ”๋žฉ์ด๋ฉด ์ฝ”๋žฉ ์ฝ”๋“œ์—, ์ฃผํ”ผํ„ฐ๋ฉด ์ฃผํ”ผํ„ฐ cmd์ฐฝ์— ์„ค์น˜
pip install mtcnn 
import mtcnn

# ์ด๋ฏธ์ง€ ํ™•์ธ ๋“ฑ์„ ์œ„ํ•ด matplotlib๋„ ์„ค์น˜๊ฐ€ ํ•„์š”ํ•˜๋‹ค.
from matplotlib import pyplot

2. ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋กœ MTCNN ์–ผ๊ตด์˜์—ญ ๊ธฐ๋Šฅ ํ™•์ธํ•˜๊ธฐ

# example of face detection with mtcnn
from matplotlib import pyplot
from PIL import Image
from numpy import asarray
from mtcnn.mtcnn import MTCNN
 
# extract a single face from a given photograph
def extract_face(filename, required_size=(224, 224)):
	# load image from file
	pixels = pyplot.imread(filename)
	# create the detector, using default weights
	detector = MTCNN()
	# detect faces in the image
	results = detector.detect_faces(pixels)
	# extract the bounding box from the first face
	x1, y1, width, height = results[0]['box']
	x2, y2 = x1 + width, y1 + height
	# extract the face
	face = pixels[y1:y2, x1:x2]
	# resize pixels to the model size
	image = Image.fromarray(face)
	image = image.resize(required_size)
	face_array = asarray(image)
	return face_array
 
# load the photo and extract the face
pixels = extract_face('/content/sampley.jpeg')
# plot the extracted face
pyplot.imshow(pixels)
# show the plot
pyplot.show()

(Code output screenshot)

3. Getting the surrounding region coordinates with MTCNN

pixels = pyplot.imread('/content/sampley.jpeg')
# create the detector, using default weights
detector = MTCNN()
# detect faces in the image
results = detector.detect_faces(pixels)
print(results)

๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ถœ๋ ฅ๋œ๋‹ค.

[{'box': [104, 70, 143, 179], 'confidence': 0.9999216794967651, 'keypoints': {'left_eye': (148, 145), 'right_eye': (211, 136), 'nose': (195, 167), 'mouth_left': (172, 210), 'mouth_right': (219, 202)}}]
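Besides the bounding box, the result also contains five facial keypoints, and these can be drawn straight onto the image. The following is only a small sketch (it reuses pixels and results from above and uses OpenCV for drawing):

import cv2

# draw a small white dot on each detected landmark (eyes, nose, mouth corners)
img = pixels.copy()
for name, (px, py) in results[0]['keypoints'].items():
    cv2.circle(img, (px, py), 3, (255, 255, 255), thickness=-1)
pyplot.imshow(img)
pyplot.show()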

 

x1, y1, width, height = results[0]['box']
x2, y2 = x1 + width, y1 + height
# extract the face
face2 = pixels[y1-30:y2+30, x1-30:x2+30]
# resize pixels to the model size
image = Image.fromarray(face2)
image = image.resize((254, 254))
face_array = asarray(image)
pyplot.imshow(image)
# show the plot
pyplot.show()

face2 = pixels[y1-30:y2+30, x1-30:x2+30]

face2๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ”ฝ์…€ ์œ„์น˜๋ฅผ ์กฐ์ •ํ•ด์„œ ์–ผ๊ตด์˜์—ญ์„ ๋„“ํ˜€์ฃผ์—ˆ๋‹ค.

(Code output screenshot)
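One caveat with the fixed 30-pixel margin: when the detected box lies close to the image border, y1-30 or x1-30 can become negative and the slice no longer covers the intended region. A safer variant is a small sketch like the following, which clamps the widened box to the image boundaries (same variables as above):

# clamp the widened box to the image boundaries before slicing
margin = 30
img_h, img_w = pixels.shape[:2]
y1m, y2m = max(0, y1 - margin), min(img_h, y2 + margin)
x1m, x2m = max(0, x1 - margin), min(img_w, x2 + margin)
face2 = pixels[y1m:y2m, x1m:x2m]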

 

4. Detecting multiple faces with MTCNN

(MTCNN์€ ๊ธฐ๋ณธ ๋ฐ˜ํ™˜๊ฐ’์œผ๋กœ ์ œ์ผ ๋จผ์ € ์ธ์‹ํ•œ ์–ผ๊ตด์—ญ์—ญ ์ •๋ณด๋ฅผ ๋ฐ˜ํ™˜ํ•˜์ง€๋งŒ, for๋ฌธ์„ ์จ์„œ ์—ฌ๋Ÿฌ ์–ผ๊ตด๋„ ํ™•๋ณด๊ฐ€ ๊ฐ€๋Šฅํ–ˆ๋‹ค.)

pixels = pyplot.imread('/content/PEOPLE.jpeg')
# create the detector, using default weights
detector = MTCNN()
# detect faces in the image
for results in detector.detect_faces(pixels):
    print(results)

๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋‚˜์˜จ๋‹ค.

{'box': [23, 62, 52, 71], 'confidence': 0.9998407363891602, 'keypoints': {'left_eye': (40, 87), 'right_eye': (64, 92), 'nose': (52, 103), 'mouth_left': (37, 113), 'mouth_right': (57, 117)}}
{'box': [106, 118, 53, 69], 'confidence': 0.9998244643211365, 'keypoints': {'left_eye': (115, 145), 'right_eye': (138, 142), 'nose': (123, 158), 'mouth_left': (118, 168), 'mouth_right': (143, 165)}}
{'box': [146, 15, 40, 50], 'confidence': 0.9995580315589905, 'keypoints': {'left_eye': (160, 32), 'right_eye': (177, 39), 'nose': (165, 47), 'mouth_left': (152, 48), 'mouth_right': (169, 55)}}
{'box': [165, 107, 64, 78], 'confidence': 0.9987497329711914, 'keypoints': {'left_eye': (176, 135), 'right_eye': (203, 144), 'nose': (178, 158), 'mouth_left': (168, 161), 'mouth_right': (198, 170)}}
{'box': [76, 73, 39, 53], 'confidence': 0.9984415173530579, 'keypoints': {'left_eye': (85, 93), 'right_eye': (104, 95), 'nose': (93, 107), 'mouth_left': (83, 109), 'mouth_right': (104, 111)}}
{'box': [100, 21, 45, 58], 'confidence': 0.9919271469116211, 'keypoints': {'left_eye': (112, 43), 'right_eye': (132, 43), 'nose': (122, 57), 'mouth_left': (110, 62), 'mouth_right': (133, 62)}}
{'box': [190, 28, 62, 73], 'confidence': 0.970379650592804, 'keypoints': {'left_eye': (205, 52), 'right_eye': (227, 63), 'nose': (203, 70), 'mouth_left': (191, 79), 'mouth_right': (211, 89)}}

 

๊ฒฐ๊ณผ๊ฐ’์—์„œ box๊ฐ’๋งŒ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ์œผ๋ฉด  results['box']๋กœ ์‚ฌ์šฉํ•˜๋ฉด ๋œ๋‹ค.

 

# To draw the result on the image, the following code can be used.
import cv2
from google.colab.patches import cv2_imshow

for results in detector.detect_faces(pixels):
    x, y, w, h = results['box']
    print(results['box'])
    cv2.rectangle(pixels, (x, y), (x + w, y + h), (255, 255, 255), thickness=2)
cv2_imshow(pixels)

Result when an image with multiple faces is fed in:

 

5. ์˜์ƒ์—์„œ ์–ผ๊ตด์ธ์‹ ํ•˜๊ธฐ (์ฝ”๋žฉ ํ™˜๊ฒฝ)

#์ฝ”๋žฉ์—์„œ๋Š” ๊ฒฐ๊ณผํ™”๋ฉด์œผ๋กœ ๋™์˜์ƒ์ด ๋งŒ๋“ค์–ด์ง€์ง€ ์•Š์•„, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ฝ”๋“œ๊ฐ€ ํ•„์š”ํ•˜๋‹ค.
from google.colab.patches import cv2_imshow
def videoFaceDetector(cam, required_size=(224, 224)): #์˜์ƒ์—์„œ ์–ผ๊ตด์„ ๊ฒ€์ถœํ•˜๊ธฐ ์œ„ํ•œ ํ•จ์ˆ˜ ์ •์˜
  while True:
    ret, img = cam.read() #์˜์ƒ ์บก์ณ
    try:
      img= cv2.resize(img,dsize=None, fx=1.0,fy=1.0) #์ด๋ฏธ์ง€ ํฌ๊ธฐ ์กฐ์ ˆ
    except:break

    detector=MTCNN() #detector๋กœ MTCNN ์‚ฌ์šฉ 
    for results in detector.detect_faces(img): #์–ผ๊ตด ๋‹ค์ค‘ ์ธ์‹์„ ์œ„ํ•œ ๋ฐ˜๋ณต๋ฌธ
      x, y, w, h = results['box'] #์–ผ๊ตด ์œ„์น˜๊ฐ’
      cv2.rectangle(img, (x,y), (x+w, y+h), (255,255,255), thickness=2) #์ด๋ฏธ์ง€์— ์–ผ๊ตด์˜์—ญ ์ƒ์žํ‘œ์‹œ๋ฅผ ์œ„ํ•œ ์ฝ”๋“œ
      
    cv2_imshow(img)#์ฝ”๋žฉ์—์„œ ์ด๋ฏธ์ง€ ํŒŒ์ผ์„ ๋ณด๊ธฐ(์˜์ƒ์€ ํ”„๋ ˆ์ž„ํ™”๋˜์–ด ํ‘œ์‹œ)
      
    if cv2.waitKey(1) > 0: break # while ๋ฃจํ”„์—์„œ 1msec ๋™์•ˆ์˜ ์ถœ๋ ฅ์„ ๋ณด์—ฌ์คŒ
cam = cv2.VideoCapture('/content/sample.mp4') #์˜์ƒ์—…๋กœ๋“œ
videoFaceDetector(cam) #์‹คํ–‰ (์ฝ”๋žฉ์€ ํ•œ ํ”„๋ ˆ์ž„์”ฉ ๊ฒฐ๊ณผ๊ฐ€ ๋Š๊ธฐ๋ฉด์„œ ํ‘œ์‹œ๋œ๋‹ค.)
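Since Colab only displays still frames, one possible alternative (a sketch, not part of the original code; /content/out.mp4 is a hypothetical output path) is to write the annotated frames into a new video file with cv2.VideoWriter and download or play that file afterwards:

# write the annotated frames to a video file instead of showing them one by one
import cv2
from mtcnn.mtcnn import MTCNN

def videoFaceWriter(in_path, out_path):
  cam = cv2.VideoCapture(in_path)
  fps = cam.get(cv2.CAP_PROP_FPS) or 25 # fall back to 25 fps if the value is missing
  w = int(cam.get(cv2.CAP_PROP_FRAME_WIDTH))
  h = int(cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
  writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
  detector = MTCNN()
  while True:
    ret, img = cam.read()
    if not ret: break
    for face in detector.detect_faces(img):
      x, y, fw, fh = face['box']
      cv2.rectangle(img, (x, y), (x + fw, y + fh), (255, 255, 255), thickness=2)
    writer.write(img)
  cam.release()
  writer.release()

videoFaceWriter('/content/sample.mp4', '/content/out.mp4') # out.mp4 is a hypothetical output path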

๋”ฅ๋Ÿฌ๋‹์œผ๋กœ ์–ผ๊ตด ์ธ์‹ + ๋‚˜์ด์™€ ์„ฑ๋ณ„ ์˜ˆ์ธกํ•˜๊ธฐ

๋”ฅ๋Ÿฌ๋‹์„ ํ™œ์šฉํ•˜์—ฌ ์˜์ƒ ๋‚ด์˜ ์–ผ๊ตด์„ ์ธ์‹ํ•˜๊ณ , ๊ทธ ์‚ฌ๋žŒ์˜ ๋‚˜์ด์™€ ์„ฑ๋ณ„์„ ์˜ˆ์ธกํ•˜๋Š” ํ”„๋กœ์ ํŠธ๋ฅผ ์ง„ํ–‰ํ•˜๊ณ ์ž ํ•œ๋‹ค.

์ผ€๋ผ์Šค ๋“ฑ์„ ์ด์šฉํ•ด์„œ ์‹ ๊ฒฝ๋ง์„ ์ง์ ‘ ๊ตฌํ˜„ํ•˜๋ฉด ์ข‹๊ฒ ์ง€๋งŒ, ๊ฐ„ํŽธํ•˜๊ฒŒ ์˜คํ”ˆ ์†Œ์Šค๋ฅผ ํ™œ์šฉํ•  ์ˆ˜๋„ ์žˆ๋‹ค.

์–ผ๊ตด์ธ์‹ ๋ถ„์•ผ์—์„œ๋Š” Caffe ํ”„๋ ˆ์ž„์›Œํฌ๋‚˜ Cascade ์˜คํ”ˆCV๊ฐ€ ๋Œ€ํ‘œ์ ์ธ ๊ฒƒ ๊ฐ™๋‹ค.


Caffe

(Convolutional Architecture for Fast Feature Embedding)

A deep learning framework built around speed and modularity.

Advantage: a custom network can be put together without programming in DIGITS (a web app for training deep learning models).

However, data is still needed to train the model!


GitHub - GilLevi/AgeGenderDeepLearning: https://github.com/GilLevi/AgeGenderDeepLearning
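Such a pretrained Caffe model can also be used without installing Caffe itself, because OpenCV's dnn module can load it directly. The sketch below is only illustrative: deploy_age.prototxt and age_net.caffemodel are hypothetical local file names for a model downloaded from a repo like the one above, and the 8 age buckets follow the commonly used Adience grouping.

# a minimal sketch of loading a pretrained Caffe age model with OpenCV's dnn module
import cv2

# the 8 age buckets commonly used with the Adience age model (assumption)
AGE_BUCKETS = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']

# deploy_age.prototxt / age_net.caffemodel are hypothetical local file names
net = cv2.dnn.readNetFromCaffe('deploy_age.prototxt', 'age_net.caffemodel')

def predict_age(face_bgr):
    # this style of age/gender net expects a 227x227 input blob
    blob = cv2.dnn.blobFromImage(face_bgr, 1.0, (227, 227))
    net.setInput(blob)
    preds = net.forward()
    return AGE_BUCKETS[preds[0].argmax()]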


Cascade

Cascade classifier

A representative OpenCV API.

Provides pretrained face detection data (haarcascade) as XML files:


https://github.com/opencv/opencv/tree/master/data/haarcascades
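For comparison with MTCNN, the sketch below loads one of those XML files with OpenCV's CascadeClassifier (assuming haarcascade_frontalface_default.xml has been downloaded locally and reusing the /content/PEOPLE.jpeg image from above; scaleFactor and minNeighbors are typical values, not tuned ones):

# a minimal sketch of face detection with an OpenCV Haar cascade
import cv2

cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

img = cv2.imread('/content/PEOPLE.jpeg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns (x, y, w, h) boxes, comparable to MTCNN's 'box' values
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 255, 255), thickness=2)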

 


Related post: [Python] Predicting gender and age with OpenCV in Python (deep-eye.tistory.com)
