Channel: OpenCV Q&A Forum - Latest question feed

face.predict error in cv3 python 2.7

The following code works in cv2, but neither predict line works in cv3:

```python
recognizer = cv2.face.createLBPHFaceRecognizer(radius=2, neighbors=16, grid_x=3, grid_y=3)
(prediction, conf) = recognizer.predict(testing.data[i])
# OR..
(prediction, conf) = cv2.face.predict(testing.data[i])
```

The attempt to call predict results in either an "object is not iterable" or an "object has no attribute 'predict'" error. I found some old discussion related to this Face class, https://github.com/peterbraden/node-opencv/pull/379/files and https://github.com/opencv/opencv_contrib/issues/184, but I'm not sure if it's still relevant or if it's even the same issue I have. I'm pretty new to this, so I think I'm just calling it the wrong way, especially since predict is listed in the cv3 documentation: http://docs.opencv.org/3.1.0/dd/d65/classcv_1_1face_1_1FaceRecognizer.html
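The second contrib issue linked above describes what is probably the same problem: in some 3.0/3.1 Python builds, `predict()` was bound to the overload that returns only the label, so unpacking the result into `(prediction, conf)` raises "object is not iterable". Also, `predict` is a method of the recognizer instance, not of the `cv2.face` module, which explains the "no attribute" variant. A minimal sketch that tolerates both binding behaviours, assuming the opencv_contrib face module is installed and `train_images`/`train_labels`/`test_image` are your own data:

```python
import cv2

# OpenCV 3.x: the factory lives in the cv2.face submodule (opencv_contrib).
recognizer = cv2.face.createLBPHFaceRecognizer(radius=2, neighbors=16,
                                               grid_x=3, grid_y=3)
recognizer.train(train_images, train_labels)   # hypothetical training data

# predict() must be called on the recognizer instance, never on cv2.face.
result = recognizer.predict(test_image)        # hypothetical 8-bit gray image

# Some 3.x builds return a bare int label, others a (label, confidence)
# tuple; handle both instead of unpacking blindly.
if isinstance(result, tuple):
    prediction, conf = result
else:
    prediction, conf = result, None
print(prediction, conf)
```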

Problem with knn

Hi there, I'm using OpenCV 2.4.12 with Visual Studio 2012. I am trying to classify facial images into 3 age groups. I have extracted the features using PCA; below is my PCA code. I need help classifying the images with kNN using the extracted features.

```cpp
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace std;

static Mat norm_0_255(InputArray _src) {
    Mat src = _src.getMat();
    // Create and return normalized image:
    Mat dst;
    switch(src.channels()) {
    case 1:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
        break;
    case 3:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC3);
        break;
    default:
        src.copyTo(dst);
        break;
    }
    return dst;
}

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if(!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc < 2) {
        cout << "usage: " << argv[0] << " <csv.ext> <output_folder> " << endl;
        exit(1);
    }
    string output_folder = ".";
    if (argc == 3) {
        output_folder = string(argv[2]);
    }
    // Get the path to your CSV.
    string fn_csv = string(argv[1]);
    // These vectors hold the images and corresponding labels.
    vector<Mat> images;
    vector<int> labels;
    // Read in the data. This can fail if no valid
    // input filename is given.
    try {
        read_csv(fn_csv, images, labels);
    } catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        // nothing more we can do
        exit(1);
    }
    // Quit if there are not enough images for this demo.
    if(images.size() <= 1) {
        string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
        CV_Error(CV_StsError, error_message);
    }
    // Get the height from the first image. We'll need this
    // later in code to reshape the images to their original
    // size:
    int height = images[0].rows;
    // The following lines simply get the last image from
    // your dataset and remove it from the vector. This is
    // done, so that the training data (which we learn the
    // cv::FaceRecognizer on) and the test data we test
    // the model with, do not overlap.
    Mat testSample = images[images.size() - 1];
    int testLabel = labels[labels.size() - 1];
    images.pop_back();
    labels.pop_back();
    // The following lines create an Eigenfaces model for
    // face recognition and train it with the images and
    // labels read from the given CSV file.
    // This here is a full PCA, if you just want to keep
    // 10 principal components (read Eigenfaces), then call
    // the factory method like this:
    //
    //      cv::createEigenFaceRecognizer(10);
    //
    // If you want to create a FaceRecognizer with a
    // confidence threshold (e.g. 123.0), call it with:
    //
    //      cv::createEigenFaceRecognizer(10, 123.0);
    //
    // If you want to use _all_ Eigenfaces and have a threshold,
    // then call the method like this:
    //
    //      cv::createEigenFaceRecognizer(0, 123.0);
    //
    Ptr<FaceRecognizer> model0 = createEigenFaceRecognizer();
    model0->train(images, labels);
    // save the model to eigenfaces_at.yaml
    model0->save("eigenfaces_at.yml");
    //
    // Now create a new Eigenfaces Recognizer
    //
    Ptr<FaceRecognizer> model1 = createEigenFaceRecognizer();
    model1->load("eigenfaces_at.yml");
    // The following line predicts the label of a given
    // test image:
    int predictedLabel = model1->predict(testSample);
    //
    // To get the confidence of a prediction call the model with:
    //
    //      int predictedLabel = -1;
    //      double confidence = 0.0;
    //      model->predict(testSample, predictedLabel, confidence);
    //
    string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
    cout << result_message << endl;
    // Here is how to get the eigenvalues of this Eigenfaces model:
    Mat eigenvalues = model1->getMat("eigenvalues");
    // And we can do the same to display the Eigenvectors (read Eigenfaces):
    Mat W = model1->getMat("eigenvectors");
    // Get the sample mean from the training data
    Mat mean = model1->getMat("mean");
    // Display or save:
    if(argc == 2) {
        imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));
    } else {
        imwrite(format("%s/mean.png", output_folder.c_str()), norm_0_255(mean.reshape(1, images[0].rows)));
    }
    // Display or save the Eigenfaces:
    for (int i = 0; i < min(10, W.cols); i++) {
        string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
        cout << msg << endl;
        // get eigenvector #i
        Mat ev = W.col(i).clone();
        // Reshape to original size & normalize to [0...255] for imshow.
        Mat grayscale = norm_0_255(ev.reshape(1, height));
        // Show the image & apply a Jet colormap for better sensing.
        Mat cgrayscale;
        applyColorMap(grayscale, cgrayscale, COLORMAP_JET);
        // Display or save:
        if(argc == 2) {
            imshow(format("eigenface_%d", i), cgrayscale);
        } else {
            imwrite(format("%s/eigenface_%d.png", output_folder.c_str(), i), norm_0_255(cgrayscale));
        }
    }
    // Display or save the image reconstruction at some predefined steps:
    for(int num_components = 10; num_components < 300; num_components += 15) {
        // slice the eigenvectors from the model
        Mat evs = Mat(W, Range::all(), Range(0, num_components));
        Mat projection = subspaceProject(evs, mean, images[0].reshape(1, 1));
        Mat reconstruction = subspaceReconstruct(evs, mean, projection);
        // Normalize the result:
        reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
        // Display or save:
        if(argc == 2) {
            imshow(format("eigenface_reconstruction_%d", num_components), reconstruction);
        } else {
            imwrite(format("%s/eigenface_reconstruction_%d.png", output_folder.c_str(), num_components), reconstruction);
        }
    }
    // Display if we are not writing to an output folder:
    if(argc == 2) {
        waitKey(0);
    }
    return 0;
}
```
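As for the actual question — classifying the PCA features with kNN — a rough OpenCV 2.4.x outline appended to the end of `main()` above could look like the sketch below. It reuses `model1`, `images`, `labels`, and `testSample`, projects everything into the eigenface subspace, and hands the projections to `CvKNearest`; treat it as an untested starting point, not a verified solution:

```cpp
// Sketch: kNN on the Eigenface projections (OpenCV 2.4.x ML module).
// Needs: #include "opencv2/ml/ml.hpp"
Mat Wk    = model1->getMat("eigenvectors");
Mat meank = model1->getMat("mean");

// Project every training image into the PCA subspace; one row per sample.
Mat trainData(0, Wk.cols, CV_32F);
Mat responses(0, 1, CV_32F);
for (size_t i = 0; i < images.size(); i++) {
    Mat proj = subspaceProject(Wk, meank, images[i].reshape(1, 1));
    proj.convertTo(proj, CV_32F);               // CvKNearest wants CV_32F
    trainData.push_back(proj);
    responses.push_back(Mat(1, 1, CV_32F, Scalar((float)labels[i])));
}

const int K = 3;                                // 3 age groups -> small K
CvKNearest knn;
knn.train(trainData, responses, Mat(), false, K);

// Classify the held-out sample the same way it was projected above.
Mat query = subspaceProject(Wk, meank, testSample.reshape(1, 1));
query.convertTo(query, CV_32F);
Mat results, neighborResponses, dists;
knn.find_nearest(query, K, results, neighborResponses, dists);
cout << "kNN predicted age group: " << results.at<float>(0, 0) << endl;
```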

How can I extract face features as a vector?

Hello! I use OpenCV with Python to detect faces in images (with the Haar cascade detector). Everything works fine, but I have a question: how can I extract face features and put them into a vector or array? For example: I detect a face, get the coordinates of the face in the image, extract the face, normalize it, and then I want to extract the features that differ from face to face. The exact meaning of the integers in the feature vector doesn't matter to me; I only need the vectors to differ between two faces. How can I do this? Any suggestions? BR, Alex
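A minimal sketch of the simplest possible version, assuming the cascade XML sits next to the script: crop the detected face, normalize it, and flatten the pixels into a vector. Raw pixels are the crudest descriptor that satisfies "different faces give different vectors"; LBP histograms or eigenface projections are the usual next step when raw pixels prove too sensitive to lighting and pose:

```python
import cv2
import numpy as np

# Detect, crop, and normalize a face, then flatten it into a feature vector.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")  # path assumed
gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)         # file assumed

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
    face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))  # fixed size
    face = cv2.equalizeHist(face)                          # lighting normalization
    # Naive descriptor: normalized pixel intensities as a 10000-D vector.
    features = face.flatten().astype(np.float32) / 255.0
    print(features.shape)  # (10000,)

# Two such vectors can then be compared with a simple distance:
# dist = np.linalg.norm(features_a - features_b)
```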

Which face landmarks do the 68 points of dlib correspond to?

Which face landmarks do the 68 points of dlib correspond to? I've looked at several tutorials online and it seems that they just somehow know where each of the points is in the array... Also, some of them vary the point numbers for the mouth, for example 49,60 instead of 49,59.
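For reference, the 68 points follow the iBUG 300-W annotation scheme that dlib's shape predictor was trained on. In 0-based Python indexing the usual grouping is below; tutorials quoting 49–68 for the mouth are using the original 1-based annotation numbers, which accounts for the off-by-one differences between write-ups:

```python
# 0-based index ranges (end-exclusive, like Python slices) for dlib's
# 68 landmarks, following the iBUG 300-W annotation scheme.
FACIAL_LANDMARKS_68 = {
    "jaw":           (0, 17),
    "right_eyebrow": (17, 22),
    "left_eyebrow":  (22, 27),
    "nose":          (27, 36),   # 27-30 bridge, 31-35 nostrils
    "right_eye":     (36, 42),
    "left_eye":      (42, 48),
    "outer_mouth":   (48, 60),
    "inner_mouth":   (60, 68),
}

# Example: pull the left-eye points out of a dlib detection.
# shape = predictor(gray, rect)   # hypothetical dlib shape_predictor call
# left_eye = [(shape.part(i).x, shape.part(i).y)
#             for i in range(*FACIAL_LANDMARKS_68["left_eye"])]
```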

I have this error: Assertion failed (scn == 3 || scn == 4) in cv::cvtColor

I'm trying to do face detection and motion detection for my internship. Here is my code:

```java
public class FaceDetect {

    private Button cameraButton, cropButton, pictureButton, close, compare;
    private ImageView originalFrame;
    private CheckBox haarClassifier;
    private CheckBox lbpClassifier;
    static Mat imag = null;
    private ScheduledExecutorService timer;
    private VideoCapture capture;
    private boolean cameraActive;
    private CascadeClassifier faceCascade;
    private int absoluteFaceSize;
    MatOfRect faces;
    Mat grayFrame = new Mat(), frame;
    private Rect[] facesArray;

    protected void init() {
        this.capture = new VideoCapture();
        this.faceCascade = new CascadeClassifier();
        this.absoluteFaceSize = 0;
    }

    protected void startCamera() {
        // set a fixed width for the frame
        originalFrame.setFitWidth(600);
        // preserve image ratio
        originalFrame.setPreserveRatio(true);
        if (!this.cameraActive) {
            // disable setting checkboxes
            this.haarClassifier.setDisable(true);
            this.lbpClassifier.setDisable(true);
            this.pictureButton.setDisable(false);
            this.cropButton.setDisable(false);
            this.compare.setDisable(false);
            // start the video capture
            this.capture.open(0);
            // is the video stream available?
            if (this.capture.isOpened()) {
                this.cameraActive = true;
                // grab a frame every 33 ms (30 frames/sec)
                Runnable frameGrabber = new Runnable() {
                    @Override
                    public void run() {
                        Image imageToShow = grabFrame();
                        originalFrame.setImage(imageToShow);
                    }
                };
                this.timer = Executors.newSingleThreadScheduledExecutor();
                this.timer.scheduleAtFixedRate(frameGrabber, 0, 33, TimeUnit.MILLISECONDS);
                // update the button content
                this.cameraButton.setText("Stop Camera");
            } else {
                // log the error
                System.err.println("Failed to open the camera connection...");
            }
        } else {
            // the camera is not active at this point
            this.cameraActive = false;
            // update again the button content
            this.cameraButton.setText("Start Camera");
            // enable classifiers checkboxes
            this.haarClassifier.setDisable(false);
            this.lbpClassifier.setDisable(false);
            this.pictureButton.setDisable(true);
            this.cropButton.setDisable(true);
            this.compare.setDisable(true);
            // stop the timer
            try {
                this.timer.shutdown();
                this.timer.awaitTermination(33, TimeUnit.MILLISECONDS);
            } catch (InterruptedException e) {
                // log the exception
                System.err.println("Exception in stopping the frame capture, trying to release the camera now... " + e);
            }
            // release the camera
            this.capture.release();
            // clean the frame
            this.originalFrame.setImage(null);
        }
    }

    private Image grabFrame() {
        // init everything
        Image imageToShow = null;
        Mat frame = new Mat();
        // check if the capture is open
        if (this.capture.isOpened()) {
            try {
                // read the current frame
                this.capture.read(frame);
                frame = motion(frame);
                // if the frame is not empty, process it
                if (!frame.empty()) {
                    // face detection
                    frame = new Mat(frame.size(), CvType.CV_8UC1);
                    this.detectAndDisplay(frame);
                    /*
                    // detection of motion
                    imag = frame;
                    ArrayList<Rect> array = new ArrayList<Rect>();
                    Mat outerBox = new Mat(frame.size(), CvType.CV_8UC1);
                    Imgproc.cvtColor(frame, outerBox, Imgproc.COLOR_BGR2GRAY);
                    Imgproc.GaussianBlur(outerBox, outerBox, new Size(3, 3), 0);
                    Mat diff_frame = new Mat(outerBox.size(), CvType.CV_8UC1);
                    Mat tempon_frame = new Mat(outerBox.size(), CvType.CV_8UC1);
                    Core.subtract(outerBox, tempon_frame, diff_frame);
                    Imgproc.adaptiveThreshold(diff_frame, diff_frame, 255,
                            Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 5, 2);
                    array = detection_contours(diff_frame);
                    Iterator<Rect> it2 = array.iterator();
                    Rect obj = it2.next();
                    //while (it2.hasNext())
                    Imgproc.rectangle(imag, obj.br(), obj.tl(), new Scalar(0, 255, 0), 1);
                    */
                    ArrayList<Rect> array = new ArrayList<Rect>();
                    array = detection_contours(motion(frame));
                    if (array.size() > 0) {
                        Iterator<Rect> it2 = array.iterator();
                        while (it2.hasNext()) {
                            Rect obj = it2.next();
                            Imgproc.rectangle(imag, obj.br(), obj.tl(), new Scalar(0, 255, 0), 1);
                        }
                    }
                    // convert the Mat object (OpenCV) to Image (JavaFX)
                    imageToShow = mat2Image(imag);
                }
            } catch (Exception e) {
                // log the (full) error
                System.err.println("ERROR: " + e);
            }
        }
        return imageToShow;
    }

    private void detectAndDisplay(Mat fr) {
        faces = new MatOfRect();
        //Mat grayFrame = new Mat();
        frame = fr;
        // convert the frame in gray scale
        Imgproc.cvtColor(frame, grayFrame, Imgproc.COLOR_BGR2GRAY);
        // equalize the frame histogram to improve the result
        Imgproc.equalizeHist(grayFrame, grayFrame);
        // compute minimum face size (20% of the frame height, in our case)
        if (this.absoluteFaceSize == 0) {
            int height = grayFrame.rows();
            if (Math.round(height * 0.2f) > 0) {
                this.absoluteFaceSize = Math.round(height * 0.2f);
            }
        }
        // detect faces
        this.faceCascade.detectMultiScale(grayFrame, faces, 1.1, 2, 0 | Objdetect.CASCADE_SCALE_IMAGE,
                new Size(this.absoluteFaceSize, this.absoluteFaceSize), new Size());
        // each rectangle in faces is a face: draw them!
        facesArray = faces.toArray();
        for (int i = 0; i < facesArray.length; i++)
            Imgproc.rectangle(frame, facesArray[i].tl(), facesArray[i].br(), new Scalar(0, 255, 255), 3);
        /*while (it2.hasNext())*/
    }

    protected void haarSelected(Event event) {
        // check whether the lbp checkbox is selected and deselect it
        if (this.lbpClassifier.isSelected())
            this.lbpClassifier.setSelected(false);
        this.checkboxSelection("C://workspace/recognition/src/application/haarcascades/haarcascade_frontalface_alt_tree.xml");
    }

    /**
     * The action triggered by selecting the LBP Classifier checkbox. It loads
     * the trained set to be used for frontal face detection.
     */
    @FXML
    protected void lbpSelected(Event event) {
        // check whether the haar checkbox is selected and deselect it
        if (this.haarClassifier.isSelected())
            this.haarClassifier.setSelected(false);
        this.checkboxSelection("c://workspace/recognition/src/application/lbpcascades/lbpcascade_frontalface.xml");
    }

    /**
     * Method for loading a classifier trained set from disk
     *
     * @param classifierPath
     *            the path on disk where a classifier trained set is located
     */
    private void checkboxSelection(String classifierPath) {
        // load the classifier(s)
        this.faceCascade.load(classifierPath);
        // now the video capture can start
        this.cameraButton.setDisable(false);
    }

    /**
     * Convert a Mat object (OpenCV) in the corresponding Image for JavaFX
     *
     * @param frame
     *            the {@link Mat} representing the current frame
     * @return the {@link Image} to show
     */
    private Image mat2Image(Mat frame) {
        // create a temporary buffer
        MatOfByte buffer = new MatOfByte();
        // encode the frame in the buffer, according to the PNG format
        Imgcodecs.imencode(".png", frame, buffer);
        // build and return an Image created from the image encoded in the buffer
        return new Image(new ByteArrayInputStream(buffer.toArray()));
    }

    /**
     * event for taking a picture
     **/
    @FXML
    protected void takeAPic() {
        BufferedImage image;
        image = SwingFXUtils.fromFXImage(grabFrame(), null);
        saveImage(image, 2);
    }

    // Save an image
    public static boolean saveImage(BufferedImage img, int v) {
        try {
            int val = v;
            File outputfile = new File("C://Pictures/" + val + "x.png");
            ImageIO.write(img, "jpg", outputfile);
        } catch (Exception e) {
            System.out.println("error");
        }
        return true;
    }

    public static boolean saveImage(BufferedImage img, Character c) {
        try {
            Character val = c;
            File outputfile = new File("C://Pictures/" + val + "compare.png");
            ImageIO.write(img, "jpg", outputfile);
        } catch (Exception e) {
            System.out.println("error");
        }
        return true;
    }

    @FXML
    public void cropFace() {
        for (int i = 0; i < facesArray.length; i++) {
            Rect rectCrop = new Rect(facesArray[i].tl(), facesArray[i].br());
            Mat imCrop = new Mat(frame, rectCrop);
            Image img = mat2Image(imCrop);
            BufferedImage image;
            image = SwingFXUtils.fromFXImage(img, null);
            saveImage(image, 3);
        }
    }

    @FXML
    public void close() {
        if (this.capture.isOpened()) {
            this.capture.release();
        }
        System.exit(0);
    }

    @FXML
    public void compare() {
        BufferedImage img1 = null, image = null, img2 = null;
        image = SwingFXUtils.fromFXImage(grabFrame(), null);
        saveImage(image, 10);
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e2) {
            e2.printStackTrace();
        }
        image = SwingFXUtils.fromFXImage(grabFrame(), null);
        saveImage(image, 11);
        try {
            img1 = ImageIO.read(new File("C://Pictures/10x.png"));
            img2 = ImageIO.read(new File("C://Pictures/11x.png"));
        } catch (IOException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        }
        int width1 = img1.getWidth(null);
        int width2 = img2.getWidth(null);
        int height1 = img1.getHeight(null);
        int height2 = img2.getHeight(null);
        if ((width1 != width2) || (height1 != height2)) {
            System.err.println("Error: Images dimensions mismatch");
            System.exit(1);
        }
        long diff = 0;
        for (int y = 0; y < height1; y++) {
            for (int x = 0; x < width1; x++) {
                int rgb1 = img1.getRGB(x, y);
                int rgb2 = img2.getRGB(x, y);
                int r1 = (rgb1 >> 16) & 0xff;
                int g1 = (rgb1 >> 8) & 0xff;
                int b1 = (rgb1) & 0xff;
                int r2 = (rgb2 >> 16) & 0xff;
                int g2 = (rgb2 >> 8) & 0xff;
                int b2 = (rgb2) & 0xff;
                diff += Math.abs(r1 - r2);
                diff += Math.abs(g1 - g2);
                diff += Math.abs(b1 - b2);
            }
        }
        double n = width1 * height1 * 3;
        double p = diff / n / 255.0;
        System.out.println("diff percent: " + (p * 100.0));
        String x = "diff percent: " + (p * 100.0);
        JFrame fenetre = new JFrame();
        fenetre.setTitle("comparaison");
        fenetre.setSize(400, 100);
        fenetre.setLocationRelativeTo(null);
        JPanel pan1 = new JPanel();
        pan1.setLayout(new BorderLayout());
        JLabel l = new JLabel(x);
        pan1.add(l);
        fenetre.add(pan1, BorderLayout.EAST);
        fenetre.setVisible(true);
    }

    public static ArrayList<Rect> detection_contours(Mat outmat) {
        Mat v = new Mat();
        Mat vv = outmat.clone();
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(vv, contours, v, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
        double maxArea = 100;
        int maxAreaIdx = -1;
        Rect r = null;
        ArrayList<Rect> rect_array = new ArrayList<Rect>();
        for (int idx = 0; idx < contours.size(); idx++) {
            Mat contour = contours.get(idx);
            double contourarea = Imgproc.contourArea(contour);
            if (contourarea > maxArea) {
                // maxArea = contourarea;
                maxAreaIdx = idx;
                r = Imgproc.boundingRect(contours.get(maxAreaIdx));
                rect_array.add(r);
                //Imgproc.drawContours(imag, contours, maxAreaIdx, new Scalar(0, 0, 255));
            }
        }
        v.release();
        return rect_array;
    }

    public Mat motion(Mat f) {
        Mat frame = f;
        Mat outerBox = new Mat();
        Mat diff_frame = null;
        Mat tempon_frame = null;
        Size sz = new Size(640, 480);
        int i = 0;
        while (true) {
            //camera.open(0);
            //if (camera.read(frame)) {
            Imgproc.resize(frame, frame, sz);
            imag = frame.clone();
            outerBox = new Mat(frame.size(), CvType.CV_8UC1);
            Imgproc.cvtColor(frame, outerBox, Imgproc.COLOR_BGR2GRAY);
            Imgproc.GaussianBlur(outerBox, outerBox, new Size(3, 3), 0);
            if (i == 0) {
                diff_frame = new Mat(frame.size(), CvType.CV_8UC1);
                tempon_frame = new Mat(outerBox.size(), CvType.CV_8UC1);
            }
            if (i == 1) {
                Core.subtract(outerBox, tempon_frame, diff_frame);
                Imgproc.adaptiveThreshold(diff_frame, diff_frame, 255,
                        Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 5, 2);
            }
            i = 1;
            return imag;
        }
    }
}
```

How to merge new LBP training into an existing one?

Hi, I have done several implementations of face recognition and am currently working with LBP training models. I can create LBP face training models and call the FaceRecognizer update() to append new face training data. Now, considering that I have 2 Raspberry Pi boards running the face recognizer service, I would need to replicate the XML training file to both boards every time I add a new face. Let's say I have a server that holds the faces (10 to 20 images per person) and the training XML itself. I see 3 ways to keep both boards updated:

1. The server service runs update() to add new faces to the existing XML training file, which is then downloaded to both boards.
   - PRO: Only 1 file as the training model all the time.
   - CON: The file gets bigger and bigger.
   - CON: The boards will download ever-larger files (takes time).
2. The server service runs a NEW training on the new faces and creates a NEW XML file. The boards then keep track of new XML files and APPEND them to their local file.
   - PRO: The server keeps lots of versioned .xml files.
   - PRO: The boards download only the latest training (a small file).
   - **CON: HOW do I merge/append this new XML file to the existing one, given that I cannot use update()?**
3. The boards download the new server-side face set and each board runs update() on its own XML LBP training file. (See the sketch after this list.)
   - PRO: A board downloads only a few images (< 1 MB).
   - PRO: The boards just need to run update() to refresh the training file.

Has any of you implemented something like this? Any suggestions? Thanks! Regards, Sylvio
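On option 3, which avoids the merge problem entirely: each board keeps its own model and only ever runs update() on the handful of new images. A C++ sketch under the 2.4-style API — file names and the label are placeholders — noting that update() is only supported by LBPH, which matches the setup described:

```cpp
// Sketch: a board pulls only the new face images and updates its local
// LBPH model in place (OpenCV 2.4-style contrib API assumed).
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>

using namespace cv;

int main() {
    Ptr<FaceRecognizer> model = createLBPHFaceRecognizer();
    model->load("lbph_faces.yml");        // existing training file on the board

    // Hypothetical: the few new images just downloaded from the server.
    std::vector<Mat> newImages;
    std::vector<int> newLabels;
    newImages.push_back(imread("new_person_01.png", 0));  // grayscale
    newLabels.push_back(42);                              // placeholder label

    // update() appends to the model without retraining from scratch;
    // Eigen/Fisher recognizers do not support this, LBPH does.
    model->update(newImages, newLabels);
    model->save("lbph_faces.yml");        // persist the merged model
    return 0;
}
```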

Face Tracking, pupils detection, Face rotation

I need to track eye direction. Is there any solution/API that detects pupils? Is there any known solution for tracking face rotation/direction changes? Thanks!
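Nothing in stock OpenCV detects pupils directly, but a common starting point is to find eye regions with the bundled Haar eye cascade and then treat the pupil as a dark circle inside each region; face rotation is usually estimated separately, by fitting 2D landmarks to a generic 3D head model with solvePnP. A very rough Python sketch (OpenCV 3.x-era API assumed, and every parameter below is a guess to tune):

```python
import cv2
import numpy as np

# Detect eye regions, then look for a pupil-sized dark circle in each.
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")  # path assumed
gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)

for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(gray, 1.1, 5):
    roi = cv2.GaussianBlur(gray[ey:ey + eh, ex:ex + ew], (5, 5), 0)
    # Hough transform tuned for one pupil-sized circle per eye ROI.
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=ew,
                               param1=100, param2=20,
                               minRadius=eh // 10, maxRadius=eh // 3)
    if circles is not None:
        cx, cy, r = circles[0][0]
        print("pupil at", ex + cx, ey + cy, "radius", r)
```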

Unable to track faces. How do I update trackers?

I am doing multiple face detection & tracking, but the trackers are not updating properly. The code compiles without errors; at runtime I get the message below (not an error, though):

    Trackers are initialized correctly.
    unable to track

Here is the code:

```cpp
// Detect faces
std::vector<Rect> faces;
Rect2d face_rect2d;
face_cascade.detectMultiScale(image, faces, 1.2, 2, 0 | CV_HAAR_SCALE_IMAGE,
                              Size(min_face_size, min_face_size),
                              Size(max_face_size, max_face_size));

for (unsigned int i = 0; i < faces.size(); ++i)
{
    face_rect2d = faces[i];
    // Draw the detections
    rectangle(image, face_rect2d, Scalar(255, 255, 0), 1, 4);

    // Create tracker
    Ptr<Tracker> tracker = Tracker::create("MEDIANFLOW");
    if (tracker.empty())
    {
        std::cerr << "***Error in the instantiation of the tracker...***\n";
        return -1;
    }

    // Check if they are initialized
    if (!tracker->init(image, face_rect2d))
    {
        std::cerr << "***Could not initialize tracker...***\n";
        return -1;
    }
    else
    {
        std::cout << "Trackers are initialized correctly." << std::endl;
    }

    // Check if they are updated
    if (!(tracker->update(image, face_rect2d)))
    {
        printf("unable to track\n");
    }
    cv::rectangle(image, face_rect2d, cv::Scalar(255, 0, 255), 2, 1);
}
```
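For what it's worth, two things stand out in the snippet: a new MEDIANFLOW tracker is created for every detection on every pass, and `update()` is called on the very same image the tracker was just initialized with — MEDIANFLOW estimates motion between frames, so at that point it has nothing to track. A sketch of the usual structure, assuming the 3.x-era opencv_contrib tracking API and reusing `image` and `faces` from above (`cap` is a hypothetical VideoCapture):

```cpp
// Sketch: initialize once on the detection frame, update on later frames.
std::vector<Ptr<Tracker>> trackers;
std::vector<Rect2d> boxes;

// --- detection frame ---
for (size_t i = 0; i < faces.size(); ++i)
{
    Ptr<Tracker> t = Tracker::create("MEDIANFLOW");
    Rect2d box = faces[i];
    if (!t.empty() && t->init(image, box))
    {
        trackers.push_back(t);
        boxes.push_back(box);
    }
}

// --- every subsequent frame ---
while (cap.read(image))
{
    for (size_t i = 0; i < trackers.size(); ++i)
    {
        if (trackers[i]->update(image, boxes[i]))
            rectangle(image, boxes[i], Scalar(255, 0, 255), 2, 1);
        // else: tracker lost the face; consider re-running detection
    }
    imshow("tracking", image);
    if (waitKey(1) == 27) break;
}
```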

What file do I include to use the FaceRecognizer class in the iOS framework?

Hi, I'm trying to implement the FaceRecognizer class using the instructions I found [here](http://docs.opencv.org/trunk/da/d60/tutorial_face_main.html). I noticed it includes a file called `opencv2/face.hpp`, which seems to contain the `cv::face` namespace and the `BasicFaceRecognizer` class. The problem is that I can't seem to find the `face.hpp` file in the OpenCV iOS framework that I downloaded from http://opencv.org/releases.html (I'm using OpenCV 3.2). What file do I need to include in my project to use the face recognition API? Thank you.
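The `face` module lives in the separately distributed `opencv_contrib` repository, so (as far as I can tell) the prebuilt iOS framework from opencv.org simply doesn't contain `face.hpp`; there is no header to include until the framework is rebuilt with the contrib modules. A sketch of that build, assuming `build_framework.py` in your OpenCV version accepts the `--contrib` option:

```sh
# Rebuild the iOS framework with the contrib modules (face, etc.) included.
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git
(cd opencv && git checkout 3.2.0)
(cd opencv_contrib && git checkout 3.2.0)

# --contrib points the build script at the opencv_contrib checkout.
python opencv/platforms/ios/build_framework.py ios --contrib opencv_contrib
```

After that, `#import <opencv2/face.hpp>` should resolve inside the rebuilt opencv2.framework.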

I want to estimate age using face recognition

If you know of source code or a method to recognize wrinkles or estimate age from a face, please answer. Thank you.

Face detection using Cascade Classifier in OpenCV Python

I'm trying to run the face detection code below, but I am unable to fix the error. Help me fix it; thanks in advance. Find the code below:

```python
import numpy as np
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

img = cv2.imread('11.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = img[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

Error message:

    Traceback (most recent call last):
      File "C:\Users\vmadoori\Desktop\image processing\face_eye_detection.py", line 4, in <module>
        face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    error: C:\build\master_winpack-bindings-win32-vc14-static\opencv\modules\core\src\persistence.cpp:4422: error: (-49) Input file is empty in function cvOpenFileStorage
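The traceback points at the `CascadeClassifier` constructor, and "Input file is empty in function cvOpenFileStorage" usually means the XML path resolved to a missing or empty file rather than anything being wrong with the detection code itself. A sketch of a defensive load, where the fallback directory is an assumption that varies per installation:

```python
import cv2
import os

cascade_path = 'haarcascade_frontalface_default.xml'
if not os.path.isfile(cascade_path):
    # Fall back to the copy shipped with the OpenCV sources; this exact
    # directory is a guess and depends on where OpenCV was installed.
    cascade_path = os.path.join('C:/opencv/sources/data/haarcascades',
                                'haarcascade_frontalface_default.xml')

face_cascade = cv2.CascadeClassifier(cascade_path)
# empty() catches a cascade that silently failed to load.
assert not face_cascade.empty(), 'cascade failed to load: ' + cascade_path
```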

How to construct a 3D face from 2D images in OpenCV?

Hello, I'd like to know what OpenCV offers nowadays for constructing a 3D face from 2D images. I found a similar post on the OpenCV forum which is now already 4 years old (http://answers.opencv.org/question/25841/conversion-of-2d-images-to-3d-and-face-recognition/). Does OpenCV now offer new solutions for this? If not, what other, newer possibilities are there? The end goal is to be able to do feature extraction for every face, in order to implement face recognition.

Face Recognition error line 1010

I am a beginner with OpenCV projects. I'm following the tutorial at this [link text](http://docs.opencv.org/2.4/modules/contrib/doc/facerec/tutorial/facerec_video_recognition.html) and trying to solve this error. I'm using OpenCV version 2.4.13.3 and Visual Studio 2017 on Windows 10. ![image description](/upfiles/15029397909700705.jpg) My text file content: ![image description](/upfiles/15029399032974651.jpg) My code is from the OpenCV website (http://docs.opencv.org/2.4/modules/contrib/doc/facerec/tutorial/facerec_video_recognition.html). Please help me figure out how to solve this error. Thank you so much. ^-^