How to get the face position inside a video using face-api.js
I am using the face-api.js library: https://github.com/justadudewhohacks/face-api.js

I am trying to get the face position inside the video.

I would like to make an application that first records my face's initial position and then reports how much my face has moved.

For example, my video's width is 600px and its height is 400px. I want to get my eye positions; for example, my left eye is 200px from the right and 300px from the bottom. That is my left eye's first position. After setting the first position, if I move, the app should show an alert or pop-up window.


1 Answer


First of all, create the video element, start the webcam stream, and load all the models. Make sure you load all the models inside a single Promise.all() call.
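
For reference, here is a minimal setup sketch (the '/models' path and the video element id are assumptions; adjust them to your project):

const video = document.getElementById('video');

// Load all the models in parallel before starting detection
Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
    faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
    faceapi.nets.faceExpressionNet.loadFromUri('/models')
]).then(startVideo);

// Pipe the webcam stream into the video element
function startVideo() {
    navigator.mediaDevices.getUserMedia({ video: true })
        .then(stream => { video.srcObject = stream; })
        .catch(err => console.error(err));
}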

You can draw both the face detections and the face landmarks like this:

video.addEventListener('play', () => {
    // Create a canvas that mirrors our video element
    const canvas = faceapi.createCanvasFromMedia(video);
    document.body.append(canvas);
    // Current size of our video
    const displaySize = { width: video.width, height: video.height };
    faceapi.matchDimensions(canvas, displaySize);
    // Run the detection repeatedly --> setInterval
    // The callback is async because the library's API is promise-based
    setInterval(async () => {
        // Every 100ms, detect all the faces in the video element
        const detections = await faceapi
            .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
            .withFaceLandmarks()
            .withFaceExpressions();
        // Resize the boxes so they line up with the video element
        const resizedDetections = faceapi.resizeResults(detections, displaySize);
        // Get the 2D context and clear the whole canvas
        canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
        faceapi.draw.drawDetections(canvas, resizedDetections);
        faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
        faceapi.draw.drawFaceExpressions(canvas, resizedDetections);
    }, 100);
});

Then you can retrieve the Face Landmark points and contours.

This is for all Face Landmark positions:

const landmarkPositions = landmarks.positions

This is for the positions of individual features:

// only available for 68 point face landmarks (FaceLandmarks68)
const jawOutline = landmarks.getJawOutline();
const nose = landmarks.getNose();
const mouth = landmarks.getMouth();
const leftEye = landmarks.getLeftEye();
const rightEye = landmarks.getRightEye();
const leftEyeBrow = landmarks.getLeftEyeBrow();
const rightEyeBrow = landmarks.getRightEyeBrow();
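
Each getter returns an array of points with x/y coordinates in the input's coordinate space, so you can convert them into the question's "from the right / from the bottom" measurements by subtracting from the video size. A rough sketch (the helper name eyeFromRightBottom is hypothetical):

// Average the eye's landmark points to get its center, then
// express that center as distances from the right and bottom edges
function eyeFromRightBottom(eyePoints, displaySize) {
    const cx = eyePoints.reduce((sum, p) => sum + p.x, 0) / eyePoints.length;
    const cy = eyePoints.reduce((sum, p) => sum + p.y, 0) / eyePoints.length;
    return {
        fromRight: displaySize.width - cx,   // e.g. 200px from the right
        fromBottom: displaySize.height - cy  // e.g. 300px from the bottom
    };
}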

For the position of the left eye, you can create an async function inside video.addEventListener and read out the left eye's first position:

video.addEventListener('play', () => {
    ...
    async function leftEyePosition() {
        const landmarks = await faceapi.detectFaceLandmarks(video);
        const leftEye = landmarks.getLeftEye();
        console.log("Left eye position ===> " + JSON.stringify(leftEye));
    }
    leftEyePosition();
});
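
From there, detecting movement is just a comparison against the stored first position, run on every detection tick (for example inside the setInterval callback above). A minimal sketch (the MOVE_THRESHOLD value and the checkMovement helper are assumptions, not part of face-api.js):

let firstPosition = null;
const MOVE_THRESHOLD = 30; // px; tune to your needs

// Call this on every detection tick with the current eye position
// ({ fromRight, fromBottom }, as computed above)
function checkMovement(current) {
    if (!firstPosition) {
        firstPosition = current; // lock in the starting position
        return;
    }
    const dx = current.fromRight - firstPosition.fromRight;
    const dy = current.fromBottom - firstPosition.fromBottom;
    if (Math.hypot(dx, dy) > MOVE_THRESHOLD) {
        alert('Your face moved about ' + Math.round(Math.hypot(dx, dy)) + 'px from its first position');
    }
}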
