Channel: OpenCV Q&A Forum - Latest question feed

face.predict error in cv3 python 2.7

The following code works in cv2, but neither predict line works in cv3:

```python
recognizer = cv2.face.createLBPHFaceRecognizer(radius=2, neighbors=16, grid_x=3, grid_y=3)
(prediction, conf) = recognizer.predict(testing.data[i])
# OR...
(prediction, conf) = cv2.face.predict(testing.data[i])
```

The attempt to call predict results in either an "object is not iterable" or an "object has no attribute 'predict'" error. I found some old discussion related to this Face class (https://github.com/peterbraden/node-opencv/pull/379/files, https://github.com/opencv/opencv_contrib/issues/184), but I'm not sure if it's still relevant or even the same issue I have. I'm pretty new to this, so I think I'm just calling it the wrong way, especially since predict is listed in the cv3 documentation: http://docs.opencv.org/3.1.0/dd/d65/classcv_1_1face_1_1FaceRecognizer.html
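For what it's worth, `predict()` has to be called on the recognizer instance (the `cv2.face` module has no `predict`, hence "object has no attribute 'predict'"), and some OpenCV 3.x Python builds returned only the label from `predict()` instead of the 2.4-style `(label, confidence)` tuple, which is what makes the tuple unpacking raise "object is not iterable". A minimal sketch that copes with both return styles, assuming `testing.data[i]` is a grayscale image of the training size:

```python
import cv2

recognizer = cv2.face.createLBPHFaceRecognizer(radius=2, neighbors=16,
                                               grid_x=3, grid_y=3)
# ... train the recognizer here ...

result = recognizer.predict(testing.data[i])  # call on the instance, not cv2.face
if isinstance(result, tuple):
    prediction, conf = result        # bindings that return (label, confidence)
else:
    prediction, conf = result, None  # 3.x builds that return the label only
```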

Problem with knn

Hi there, I'm using OpenCV 2.4.12 with Visual Studio 2012. I am trying to classify facial images into 3 age groups. I have extracted the features using PCA. Below is my PCA code. I need help classifying the images with kNN using the features extracted by PCA.

```cpp
// (angle brackets were lost when this was posted; the missing template arguments,
//  stream operators, and argv handling are restored below from the standard
//  OpenCV Eigenfaces tutorial this code is based on)
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace std;

static Mat norm_0_255(InputArray _src) {
    Mat src = _src.getMat();
    // Create and return normalized image:
    Mat dst;
    switch (src.channels()) {
    case 1:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
        break;
    case 3:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC3);
        break;
    default:
        src.copyTo(dst);
        break;
    }
    return dst;
}

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if (!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc < 2) {
        cout << "usage: " << argv[0] << " <csv.ext> <output_folder> " << endl;
        exit(1);
    }
    string output_folder = ".";
    if (argc == 3) {
        output_folder = string(argv[2]);
    }
    // Get the path to your CSV.
    string fn_csv = string(argv[1]);
    // These vectors hold the images and corresponding labels.
    vector<Mat> images;
    vector<int> labels;
    // Read in the data. This can fail if no valid
    // input filename is given.
    try {
        read_csv(fn_csv, images, labels);
    } catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        exit(1);
    }
    // Take the last image as a test sample and remove it from training
    // (as in the tutorial):
    Mat testSample = images[images.size() - 1];
    int testLabel = labels[labels.size() - 1];
    images.pop_back();
    labels.pop_back();
    Ptr<FaceRecognizer> model0 = createEigenFaceRecognizer();
    model0->train(images, labels);
    // save the model to eigenfaces_at.yaml
    model0->save("eigenfaces_at.yml");
    //
    // Now create a new Eigenfaces Recognizer
    Ptr<FaceRecognizer> model1 = createEigenFaceRecognizer();
    model1->load("eigenfaces_at.yml");
    // The following line predicts the label of a given test image:
    int predictedLabel = model1->predict(testSample);
    //
    // To get the confidence of a prediction call the model with:
    //
    //      int predictedLabel = -1;
    //      double confidence = 0.0;
    //      model->predict(testSample, predictedLabel, confidence);
    //
    string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
    cout << result_message << endl;
    // Here is how to get the eigenvalues of this Eigenfaces model:
    Mat eigenvalues = model1->getMat("eigenvalues");
    // And we can do the same to display the Eigenvectors (read Eigenfaces):
    Mat W = model1->getMat("eigenvectors");
    // Get the sample mean from the training data
    Mat mean = model1->getMat("mean");
    // Display or save:
    if (argc == 2) {
        imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));
    } else {
        imwrite(format("%s/mean.png", output_folder.c_str()), norm_0_255(mean.reshape(1, images[0].rows)));
    }
    // Display or save the Eigenfaces:
    for (int i = 0; i < min(10, W.cols); i++) {
        string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
        cout << msg << endl;
        // ... (the rest of the eigenface display loop was cut off in the original post)
    }
    return 0;
}
```
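Since the actual goal is kNN on the PCA-projected features rather than a full Eigenfaces model, here is a minimal sketch of that pipeline, in Python for brevity (the C++ equivalent in 2.4 is `CvKNearest`). The names `train_images`, `train_labels`, and `test_images` are placeholders for your own data, flattened to one face per row:

```python
import cv2
import numpy as np

# a minimal sketch against the OpenCV 2.4 Python bindings
# (3.x spells the classifier cv2.ml.KNearest_create());
# train_images / test_images: float32, one flattened grayscale face per row;
# train_labels: float32 age-group ids, one per training row
mean, eigenvectors = cv2.PCACompute(train_images, maxComponents=50)
train_proj = cv2.PCAProject(train_images, mean, eigenvectors)
test_proj = cv2.PCAProject(test_images, mean, eigenvectors)

knn = cv2.KNearest()
knn.train(np.float32(train_proj), np.float32(train_labels))
ret, results, neighbours, dists = knn.find_nearest(np.float32(test_proj), 3)
print(results.ravel())  # one predicted age group per test face
```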

How can I extract face features as a vector

Hello! I use OpenCV with Python to detect faces in images (with the Haar cascade detector). Everything works fine, but I have a question. How can I extract face features and put them into a vector or array? For example: I detect a face, get its coordinates in the image, extract the face, normalize it, and then I want to extract the features that differ from face to face. The exact meaning of the numbers in the feature vector doesn't matter to me; I only need the vectors to be different between two faces. How can I do this? Any suggestions? BR, Alex
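One common approach is to compute a fixed-length descriptor over the normalized crop. A minimal sketch using OpenCV's HOG descriptor; `face_crop` stands in for the grayscale face you already extracted, and the window/block sizes are illustrative choices:

```python
import cv2

face = cv2.resize(face_crop, (64, 64))  # normalize the crop size first
# HOGDescriptor(winSize, blockSize, blockStride, cellSize, nbins)
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
features = hog.compute(face).ravel()    # 1-D float32 vector, same length per face
print(features.shape)
```

Note that a raw descriptor like this differs between any two images, including two shots of the same person; if the vectors need to be stable per identity, a trained recognizer (Eigenfaces/Fisherfaces/LBPH in `cv2.face`) or an embedding model is the usual route.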

Which face landmarks do the 68 points of dlib correspond to?

Which face landmarks do the 68 points of dlib correspond to? I've looked at several tutorials online, and it seems they just somehow know where each of the points is in the array... Also, some of them vary the point numbers for the mouth, for example 49,60 instead of 49,59.
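For reference, the 68 points follow the iBUG 300-W annotation layout, and the off-by-one differences between tutorials come from that scheme being numbered 1-68 while dlib's array is indexed 0-67 (so the outer mouth is 49-60 in 1-based guides and 48-59 in 0-based code). A minimal sketch of the usual 0-based grouping, assuming the standard `shape_predictor_68_face_landmarks.dat` model:

```python
import dlib

# the customary iBUG 300-W regions, 0-based as dlib's shape.part() indexes them
REGIONS = {
    "jaw":           range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "outer_mouth":   range(48, 60),
    "inner_mouth":   range(60, 68),
}

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks_by_region(gray):
    """Yield one dict of region -> [(x, y), ...] per detected face."""
    for face in detector(gray):
        shape = predictor(gray, face)
        yield {name: [(shape.part(i).x, shape.part(i).y) for i in idx]
               for name, idx in REGIONS.items()}
```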

I have this error: Assertion failed (scn == 3 || scn == 4) in cv::cvtColor

I'm trying to do face detection and motion detection for my internship... here is my code:

```java
// (angle brackets were lost when this was posted; generics and loop bounds are
//  restored below where the context makes them clear, and gaps are marked)
public class FaceDetect {
    private Button cameraButton, cropButton, pictureButton, close, compare;
    private ImageView originalFrame;
    private CheckBox haarClassifier;
    private CheckBox lbpClassifier;
    static Mat imag = null;
    private ScheduledExecutorService timer;
    private VideoCapture capture;
    private boolean cameraActive;
    private CascadeClassifier faceCascade;
    private int absoluteFaceSize;
    MatOfRect faces;
    Mat grayFrame = new Mat(), frame;
    private Rect[] facesArray;

    protected void init() {
        this.capture = new VideoCapture();
        this.faceCascade = new CascadeClassifier();
        this.absoluteFaceSize = 0;
    }

    protected void startCamera() {
        // set a fixed width for the frame
        originalFrame.setFitWidth(600);
        // preserve image ratio
        originalFrame.setPreserveRatio(true);
        if (!this.cameraActive) {
            // disable setting checkboxes
            this.haarClassifier.setDisable(true);
            this.lbpClassifier.setDisable(true);
            this.pictureButton.setDisable(false);
            this.cropButton.setDisable(false);
            this.compare.setDisable(false);
            // start the video capture
            this.capture.open(0);
            // is the video stream available?
            if (this.capture.isOpened()) {
                this.cameraActive = true;
                // grab a frame every 33 ms (30 frames/sec)
                Runnable frameGrabber = new Runnable() {
                    @Override
                    public void run() {
                        Image imageToShow = grabFrame();
                        originalFrame.setImage(imageToShow);
                    }
                };
                this.timer = Executors.newSingleThreadScheduledExecutor();
                this.timer.scheduleAtFixedRate(frameGrabber, 0, 33, TimeUnit.MILLISECONDS);
                // update the button content
                this.cameraButton.setText("Stop Camera");
            } else {
                // log the error
                System.err.println("Failed to open the camera connection...");
            }
        } else {
            // the camera is not active at this point
            this.cameraActive = false;
            // update again the button content
            this.cameraButton.setText("Start Camera");
            // enable classifiers checkboxes
            this.haarClassifier.setDisable(false);
            this.lbpClassifier.setDisable(false);
            this.pictureButton.setDisable(true);
            this.cropButton.setDisable(true);
            this.compare.setDisable(true);
            // stop the timer
            try {
                this.timer.shutdown();
                this.timer.awaitTermination(33, TimeUnit.MILLISECONDS);
            } catch (InterruptedException e) {
                // log the exception
                System.err.println("Exception in stopping the frame capture, trying to release the camera now... " + e);
            }
            // release the camera
            this.capture.release();
            // clean the frame
            this.originalFrame.setImage(null);
        }
    }

    private Image grabFrame() {
        // init everything
        Image imageToShow = null;
        Mat frame = new Mat();
        // check if the capture is open
        if (this.capture.isOpened()) {
            try {
                // read the current frame
                this.capture.read(frame);
                frame = motion(frame);
                // if the frame is not empty, process it
                if (!frame.empty()) {
                    // face detection
                    frame = new Mat(frame.size(), CvType.CV_8UC1);
                    this.detectAndDisplay(frame);
                    /* //detection of motion
                    imag = frame;
                    ArrayList<Rect> array = new ArrayList<Rect>();
                    Mat outerBox = new Mat(frame.size(), CvType.CV_8UC1);
                    Imgproc.cvtColor(frame, outerBox, Imgproc.COLOR_BGR2GRAY);
                    Imgproc.GaussianBlur(outerBox, outerBox, new Size(3, 3), 0);
                    Mat diff_frame = new Mat(outerBox.size(), CvType.CV_8UC1);
                    Mat tempon_frame = new Mat(outerBox.size(), CvType.CV_8UC1);
                    Core.subtract(outerBox, tempon_frame, diff_frame);
                    Imgproc.adaptiveThreshold(diff_frame, diff_frame, 255,
                            Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 5, 2);
                    array = detection_contours(diff_frame);
                    Iterator<Rect> it2 = array.iterator();
                    Rect obj = it2.next();
                    //while (it2.hasNext())
                    Imgproc.rectangle(imag, obj.br(), obj.tl(), new Scalar(0, 255, 0), 1); */
                    ArrayList<Rect> array = new ArrayList<Rect>();
                    array = detection_contours(motion(frame));
                    if (array.size() > 0) {
                        Iterator<Rect> it2 = array.iterator();
                        while (it2.hasNext()) {
                            Rect obj = it2.next();
                            Imgproc.rectangle(imag, obj.br(), obj.tl(), new Scalar(0, 255, 0), 1);
                        }
                    }
                    // convert the Mat object (OpenCV) to Image (JavaFX)
                    imageToShow = mat2Image(imag);
                }
            } catch (Exception e) {
                // log the (full) error
                System.err.println("ERROR: " + e);
            }
        }
        return imageToShow;
    }

    private void detectAndDisplay(Mat fr) {
        faces = new MatOfRect();
        //Mat grayFrame = new Mat();
        frame = fr;
        // convert the frame in gray scale
        Imgproc.cvtColor(frame, grayFrame, Imgproc.COLOR_BGR2GRAY);
        // equalize the frame histogram to improve the result
        Imgproc.equalizeHist(grayFrame, grayFrame);
        // compute minimum face size (20% of the frame height, in our case)
        if (this.absoluteFaceSize == 0) {
            int height = grayFrame.rows();
            if (Math.round(height * 0.2f) > 0) {
                this.absoluteFaceSize = Math.round(height * 0.2f);
            }
        }
        // detect faces
        this.faceCascade.detectMultiScale(grayFrame, faces, 1.1, 2,
                0 | Objdetect.CASCADE_SCALE_IMAGE,
                new Size(this.absoluteFaceSize, this.absoluteFaceSize), new Size());
        // each rectangle in faces is a face: draw them!
        facesArray = faces.toArray();
        for (int i = 0; i < facesArray.length; i++) {
            // ... (the body of this loop, plus the declaration and the nested
            //      pixel loops of an image-comparison method, were lost in the
            //      original post; the recovered text resumes below inside that
            //      method's per-pixel RGB diff)
            int r1 = (rgb1 >> 16) & 0xff;
            int g1 = (rgb1 >> 8) & 0xff;
            int b1 = (rgb1) & 0xff;
            int r2 = (rgb2 >> 16) & 0xff;
            int g2 = (rgb2 >> 8) & 0xff;
            int b2 = (rgb2) & 0xff;
            diff += Math.abs(r1 - r2);
            diff += Math.abs(g1 - g2);
            diff += Math.abs(b1 - b2);
            }
        }
        double n = width1 * height1 * 3;
        double p = diff / n / 255.0;
        System.out.println("diff percent: " + (p * 100.0));
        String x = "diff percent: " + (p * 100.0);
        JFrame fenetre = new JFrame();
        fenetre.setTitle("comparaison");
        fenetre.setSize(400, 100);
        fenetre.setLocationRelativeTo(null);
        JPanel pan1 = new JPanel();
        pan1.setLayout(new BorderLayout());
        JLabel l = new JLabel(x);
        pan1.add(l);
        fenetre.add(pan1, BorderLayout.EAST);
        fenetre.setVisible(true);
    }

    public static ArrayList<Rect> detection_contours(Mat outmat) {
        Mat v = new Mat();
        Mat vv = outmat.clone();
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(vv, contours, v, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
        double maxArea = 100;
        int maxAreaIdx = -1;
        Rect r = null;
        ArrayList<Rect> rect_array = new ArrayList<Rect>();
        for (int idx = 0; idx < contours.size(); idx++) {
            double contourarea = Imgproc.contourArea(contours.get(idx));
            if (contourarea > maxArea) {
                // maxArea = contourarea;
                maxAreaIdx = idx;
                r = Imgproc.boundingRect(contours.get(maxAreaIdx));
                rect_array.add(r);
                //Imgproc.drawContours(imag, contours, maxAreaIdx, new Scalar(0,0, 255));
            }
        }
        v.release();
        return rect_array;
    }

    public Mat motion(Mat f) {
        Mat frame = f;
        Mat outerBox = new Mat();
        Mat diff_frame = null;
        Mat tempon_frame = null;
        Size sz = new Size(640, 480);
        int i = 0;
        while (true) {
            //camera.open(0);
            //if (camera.read(frame)) {
            Imgproc.resize(frame, frame, sz);
            imag = frame.clone();
            outerBox = new Mat(frame.size(), CvType.CV_8UC1);
            Imgproc.cvtColor(frame, outerBox, Imgproc.COLOR_BGR2GRAY);
            Imgproc.GaussianBlur(outerBox, outerBox, new Size(3, 3), 0);
            if (i == 0) {
                diff_frame = new Mat(frame.size(), CvType.CV_8UC1);
                tempon_frame = new Mat(outerBox.size(), CvType.CV_8UC1);
            }
            if (i == 1) {
                Core.subtract(outerBox, tempon_frame, diff_frame);
                Imgproc.adaptiveThreshold(diff_frame, diff_frame, 255,
                        Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 5, 2);
            }
            i = 1;
            return imag;
        }
    }
}
```
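The assertion itself points at the likely culprit: `cvtColor(..., COLOR_BGR2GRAY)` requires a 3- or 4-channel source, but `grabFrame()` overwrites `frame` with a freshly allocated single-channel `Mat` (`frame = new Mat(frame.size(), CvType.CV_8UC1);`) right before calling `detectAndDisplay(frame)`, which then tries to convert it. A minimal sketch reproducing the failure, in Python for brevity (the Java bindings hit the same check):

```python
import cv2
import numpy as np

# a single-channel image, like 'new Mat(size, CvType.CV_8UC1)' in the Java code
single_channel = np.zeros((480, 640), np.uint8)
try:
    cv2.cvtColor(single_channel, cv2.COLOR_BGR2GRAY)
except cv2.error as e:
    print(e)  # Assertion failed (scn == 3 || scn == 4) in cvtColor

# passing the original 3-channel BGR frame works
color = np.zeros((480, 640, 3), np.uint8)
gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
```

Dropping the `frame = new Mat(...)` line and passing the captured BGR frame straight to `detectAndDisplay()` should avoid the assertion.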

How to merge new LBP training to existing one?

Hi, I have done several implementations of face recognition and am currently working with LBP training models. I can create LBP face training models and call the FaceRecognizer `update()` to append new face training data. Now, considering that I have 2 Raspberry Pi boards running the face recognizer service, I would need to replicate the XML training file onto both boards every time I add a new face. Let's say I have a server that holds the faces (10 to 20 images per person) and the training XML itself. I see 3 ways to keep both boards updated:

1. The server service runs `update()` to add the new faces to the existing XML training model, and the file is then downloaded to both boards.
   a) PROS: Only 1 file as the training model all the time.
   b) CONS: The file gets bigger and bigger all the time.
   c) CONS: The boards will download ever-bigger files (takes time...).
2. The server service runs a NEW training on the new faces only and creates a NEW XML file. The boards then keep track of new XML files and APPEND them to their current file.
   a) PROS: The server keeps lots of versioned .xml files.
   b) PROS: The boards download only the latest training (a small file).
   **c) CONS: HOW do I merge/append this new XML file to the existing one, since I cannot use `update()`?**
3. The boards download the newly available faces from the server, and each board runs `update()` on its own XML LBP training model.
   a) PROS: The boards download only a few images.
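For reference, the server-side step in option 1 is small. A minimal sketch, assuming OpenCV 3.3+ Python bindings (older 3.x spells these `cv2.face.createLBPHFaceRecognizer()` with `load()`/`save()`); `new_faces` and `new_labels` are placeholders for the new person's grayscale images and ids:

```python
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("lbph_model.xml")                    # existing server-side model
recognizer.update(new_faces, np.array(new_labels))   # append the new person
recognizer.write("lbph_model.xml")                   # redistribute to both boards
```

On the merge question in 2c: the FaceRecognizer interface exposes no merge call; an LBPH model file is essentially a list of histograms plus labels, so merging two XMLs trained with identical parameters would mean concatenating those lists by hand, outside the official API.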

Face Tracking, pupils detection, Face rotation

I need to track eye direction. Is there any solution/API to detect pupils? Is there any known solution for tracking face rotation/direction changes? Thanks!
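There is no dedicated pupil detector in the core library, but a workable baseline is to find the eye region with the stock Haar cascade and then look for the pupil as a dark circle; for face rotation, the usual route is facial landmarks plus `cv2.solvePnP` to recover head pose. A minimal sketch, assuming OpenCV 3.x and the bundled `haarcascade_eye.xml` on disk (the Hough parameters are illustrative and will need tuning):

```python
import cv2

eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")
gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.1, 5):
    roi = cv2.GaussianBlur(gray[y:y + h, x:x + w], (5, 5), 0)
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=2, minDist=w,
                               param1=100, param2=20,
                               minRadius=w // 10, maxRadius=w // 3)
    if circles is not None:
        cx, cy, r = circles[0][0]          # strongest circle = pupil candidate
        cv2.circle(roi, (int(cx), int(cy)), int(r), 255, 1)
```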

Unable to track faces. How to update trackers?

I am doing multiple-face detection & tracking, but the trackers are not updating properly. The code compiles without errors. At runtime I get the message below (not an error, though): "Trackers are initialized correctly. unable to track". Here is the code:

```cpp
// (angle brackets were lost when this was posted; generics and loop bounds are
//  restored below where the context makes them clear, and the lost message
//  strings are marked with "...")
// Detect faces
std::vector<Rect> faces;
Rect2d face_rect2d;
face_cascade.detectMultiScale(image, faces, 1.2, 2, 0 | CV_HAAR_SCALE_IMAGE,
                              Size(min_face_size, min_face_size),
                              Size(max_face_size, max_face_size));

for (unsigned int i = 0; i < faces.size(); i++)
{
    face_rect2d = faces[i];
    Ptr<Tracker> tracker = Tracker::create("MEDIANFLOW"); // Create tracker
    if (tracker.empty())
    {
        std::cerr << "..." << std::endl; // message text lost in the original post
    }
    if (!tracker->init(image, face_rect2d))
    {
        std::cerr << "..." << std::endl; // message text lost in the original post
    }
    printf("Trackers are initialized correctly.\n");
    if (!(tracker->update(image, face_rect2d)))
    {
        printf("unable to track\n");
    }
    cv::rectangle(image, face_rect2d, cv::Scalar(255, 0, 255), 2, 1);
}
```
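A likely cause: each tracker is created, initialized, and updated on the very same frame inside the detection loop, and nothing survives to the next frame, so `update()` has no new frame to track into. Trackers need to persist across frames: init once on the detection frame, then call `update()` with every newly captured frame. A minimal sketch of that pattern, in Python for brevity, assuming OpenCV 3.x with the opencv_contrib tracking module:

```python
import cv2

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# detect once, create one persistent tracker per face
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
trackers = []
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.2, 2):
    t = cv2.TrackerMedianFlow_create()   # cv2.Tracker_create("MEDIANFLOW") in 3.0/3.1
    t.init(frame, (x, y, w, h))
    trackers.append(t)

# update every tracker with each NEW frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for t in trackers:
        found, box = t.update(frame)
        if found:
            x, y, w, h = [int(v) for v in box]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 255), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:
        break
```

In the C++ above, that would mean keeping the `Ptr<Tracker>` objects in a `std::vector` outside the per-frame loop instead of recreating them after every detection.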

What file do I include to use the FaceRecognizer class in the iOS framework?

Hi, I'm trying to implement the FaceRecognizer class using the instructions I found [here](http://docs.opencv.org/trunk/da/d60/tutorial_face_main.html). I noticed it includes a file called `opencv2/face.hpp`, which seems to contain the `cv::face` namespace and the `BasicFaceRecognizer` class. The problem is that I can't seem to find the `face.hpp` file in the OpenCV iOS framework, which I downloaded from http://opencv.org/releases.html (I'm using OpenCV 3.2). What file do I need to include in my project to use the face recognition API? Thank you

I want to estimate age using face recognition

If you know of source code or a method to recognize wrinkles or estimate age, please answer. Thank you

Face detection using Cascade Classifier in OpenCV Python

I'm trying to run the face detection code below but am unable to fix the error. Help me fix it. Thanks in advance. Find the code below:

```python
import numpy as np
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

img = cv2.imread('11.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    # note: must be lowercase cv2.rectangle (cv2.Rectangle does not exist)
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = img[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

Error message:

```
Traceback (most recent call last):
  File "C:\Users\vmadoori\Desktop\image processing\face_eye_detection.py", line 4, in <module>
    face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
error: C:\build\master_winpack-bindings-win32-vc14-static\opencv\modules\core\src\persistence.cpp:4422: error: (-49) Input file is empty in function cvOpenFileStorage
```
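The traceback points at the CascadeClassifier constructor, and "(-49) Input file is empty" almost always means the XML was not found (or is empty) at the path given, i.e. it is not sitting next to the script. A minimal sketch of the usual fix; the install path below is an assumption, so adjust it to wherever your OpenCV data folder lives:

```python
import os
import cv2

cascade_dir = "C:/opencv/data/haarcascades"  # adjust to your install
cascade_path = os.path.join(cascade_dir, "haarcascade_frontalface_default.xml")

face_cascade = cv2.CascadeClassifier(cascade_path)
if face_cascade.empty():  # verify the load instead of failing later
    raise IOError("could not load cascade: " + cascade_path)
```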

How to construct a 3d face from 2d images in openCV?

Hello, I'd like to know what OpenCV offers nowadays to construct a 3D face starting from 2D images. I found a similar post on the OpenCV forum which is now already 4 years old (http://answers.opencv.org/question/25841/conversion-of-2d-images-to-3d-and-face-recognition/). Does OpenCV now offer new solutions for this? If not, what other, newer possibilities are there nowadays? The end goal is to be able to do feature extraction for every face, in order to implement face recognition.

Face Recognition error line 1010

I am a beginner with OpenCV projects. I'm following the tutorial at this link: [link text](http://docs.opencv.org/2.4/modules/contrib/doc/facerec/tutorial/facerec_video_recognition.html). I'm trying to solve this error. I'm using OpenCV version 2.4.13.3 and Visual Studio 2017 on Windows 10. ![image description](/upfiles/15029397909700705.jpg) My text file content: ![image description](/upfiles/15029399032974651.jpg) My code is from the OpenCV website (http://docs.opencv.org/2.4/modules/contrib/doc/facerec/tutorial/facerec_video_recognition.html). Please help me find out how to solve this error. Thank you so much. ^-^