This commit is contained in:
telereview
2023-03-23 19:06:51 +01:00
120 changed files with 5753 additions and 3337 deletions

30
.vscode/launch.json vendored Normal file

@@ -0,0 +1,30 @@
{
"configurations": [
{
"name": "Docker Node.js Launch",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"platform": "node",
"node": {
"package": "${workspaceFolder}/code/reviews_api/package.json",
"localRoot": "${workspaceFolder}/code/reviews_api"
}
},
{
"name": "Docker: Python - General",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"python": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "/app"
}
],
"projectType": "general"
}
}
]
}

46
.vscode/tasks.json vendored Normal file

@@ -0,0 +1,46 @@
{
"version": "2.0.0",
"tasks": [
{
"type": "docker-build",
"label": "docker-build",
"platform": "node",
"dockerBuild": {
"dockerfile": "${workspaceFolder}/code/reviews_api/Dockerfile",
"context": "${workspaceFolder}/code/reviews_api",
"pull": true
},
"node": {
"package": "${workspaceFolder}/code/reviews_api/package.json"
}
},
{
"type": "docker-run",
"label": "docker-run: release",
"dependsOn": [
"docker-build"
],
"platform": "node",
"node": {
"package": "${workspaceFolder}/code/reviews_api/package.json"
}
},
{
"type": "docker-run",
"label": "docker-run: debug",
"dependsOn": [
"docker-build"
],
"dockerRun": {
"env": {
"DEBUG": "*",
"NODE_ENV": "development"
}
},
"node": {
"package": "${workspaceFolder}/code/reviews_api/package.json",
"enableDebugging": true
}
}
]
}


@@ -1,23 +0,0 @@
This is your repository for the PACT project.
You **MUST** modify this file (`README.md`) and create all the
directories and files you will need for your project.
# Important: the progress report
The `rapport` directory contains a skeleton for your progress report.
This directory **must not be renamed**, nor the `README.adoc` file it contains.
The `README.adoc` file is the entry point of the report.
It is written using the [**AsciiDoc**](http://asciidoc.org/) language.
The syntax is supported by GitLab, which renders it correctly in the web interface.
The final document will be generated with the [Asciidoctor](http://asciidoctor.org/) tool, which supports the same extensions as GitLab (for equations, for example).
A summary of the supported syntax is available [here](http://asciidoctor.org/docs/asciidoc-syntax-quick-reference/).
You can edit the various files with *your favourite text editor*.
If you do not have one, you can use, for example:
- [**Visual Studio Code**](https://code.visualstudio.com/) with the [AsciiDoc](https://marketplace.visualstudio.com/items?itemName=asciidoctor.asciidoctor-vscode) extension, which adds syntax highlighting and live preview.
- Or of course your preferred text editor: **Sublime Text**, **Vim**, **Emacs**, …
- A web browser extension is also available to preview the result ([**here**](https://github.com/asciidoctor/asciidoctor-browser-extension)).


@@ -0,0 +1,59 @@
import cv2
import mediapipe as mp
mp_drawing = mp.solutions.drawing_utils
mp_face_mesh = mp.solutions.face_mesh
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
cap = cv2.VideoCapture(0)
with mp_face_mesh.FaceMesh(
max_num_faces=1,
refine_landmarks=True,
min_detection_confidence=0.5,
min_tracking_confidence=0.5) as face_mesh:
while cap.isOpened():
success, image = cap.read()
if not success:
print("Ignoring empty camera frame.")
continue
# The FaceMesh model is already created by the surrounding `with` block;
# convert the frame that was just read to RGB before processing it
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Process the image and extract the landmarks
results = face_mesh.process(image)
if results.multi_face_landmarks:
landmarks = results.multi_face_landmarks[0]
# Define the landmark indices for the corners of the eyes and the tip of the nose
left_eye = [33, 133, 246, 161, 160, 159, 158, 157, 173, 133]
right_eye = [362, 263, 373, 380, 381, 382, 384, 385, 386, 362]
nose_tip = 4
# Calculate the distance between the eyes and the nose tip
left_eye_x = landmarks.landmark[left_eye[0]].x * image.shape[1]
right_eye_x = landmarks.landmark[right_eye[0]].x * image.shape[1]
nose_x = landmarks.landmark[nose_tip].x * image.shape[1]
eye_distance = abs(left_eye_x - right_eye_x)
nose_distance = abs(nose_x - (left_eye_x + right_eye_x) / 2)
# Determine the gender based on the eye and nose distances
if eye_distance > 1.5 * nose_distance:
gender = "Female"
else:
gender = "Male"
# Draw the predicted label on the image
cv2.putText(image, gender, (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
# Display the video feed
cv2.imshow('Video', cv2.cvtColor(image, cv2.COLOR_RGB2BGR))
if cv2.waitKey(10) & 0xFF == ord('q'):
break
# Release the camera and free the resources
cap.release()
cv2.destroyAllWindows()


@@ -0,0 +1,88 @@
import cv2
import numpy as np
import mediapipe as mp
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_hands = mp.solutions.hands
def prodScalaire(V1,V2):
# Normalised dot product (cosine of the angle between V1 and V2)
return (V1[0]*V2[0]+V1[1]*V2[1])/(np.sqrt(V1[0]**2+V1[1]**2)*np.sqrt(V2[0]**2+V2[1]**2))
def reconnaissancePouce(handLandmarks):
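# A finger is considered folded when the vector from its PIP joint to its tip is no
# longer aligned with its proximal phalanx (dot product below the threshold). If any of
# the four non-thumb fingers is still extended the gesture is "neutre"; otherwise the
# thumb tip's position relative to its MCP joint (image y axis points down) gives
# "baissé" (thumbs down) or "levé" (thumbs up).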
etatDuPouce=["neutre","baissé","levé"]
i=0
j=0
for cpt in range (0,4):
V1=[handLandmarks[(4*cpt)+6][0]-handLandmarks[(4*cpt)+5][0],handLandmarks[(4*cpt)+6][1]-handLandmarks[(4*cpt)+5][1]]
V2=[handLandmarks[(4*cpt)+8][0]-handLandmarks[(4*cpt)+6][0],handLandmarks[(4*cpt)+8][1]-handLandmarks[(4*cpt)+6][1]]
j=np.dot(V1,V2)
if (j>0.005):
return etatDuPouce[0]
V1=[handLandmarks[4][0]-handLandmarks[1][0],handLandmarks[4][1]-handLandmarks[1][1]]
V2=[handLandmarks[2][0]-handLandmarks[1][0],handLandmarks[2][1]-handLandmarks[1][1]]
if((np.dot(V1,V2))>0 and (handLandmarks[4][1]>handLandmarks[2][1])):
i=1
elif(np.dot(V1,V2)>0 and handLandmarks[4][1]<handLandmarks[2][1]):
i=2
return etatDuPouce[i]
cap = cv2.VideoCapture(0)
with mp_hands.Hands(
model_complexity=0,
min_detection_confidence=0.5,
min_tracking_confidence=0.5) as hands:
while cap.isOpened():
success, image = cap.read()
if not success:
print("Ignoring empty camera frame.")
# If loading a video, use 'break' instead of 'continue'.
continue
# To improve performance, optionally mark the image as not writeable to
# pass by reference.
image.flags.writeable = False
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
results = hands.process(image)
# Draw the hand annotations on the image.
image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if results.multi_hand_landmarks:
for hand_landmarks in results.multi_hand_landmarks:
mp_drawing.draw_landmarks(
image,
hand_landmarks,
mp_hands.HAND_CONNECTIONS,
mp_drawing_styles.get_default_hand_landmarks_style(),
mp_drawing_styles.get_default_hand_connections_style())
# Set variable to keep landmarks positions (x and y)
handLandmarks = []
if results.multi_hand_landmarks:
for hand_landmarks in results.multi_hand_landmarks:
# Fill list with x and y positions of each landmark
for landmarks in hand_landmarks.landmark:
handLandmarks.append([landmarks.x, landmarks.y])
cv2.putText(image, reconnaissancePouce(handLandmarks), (50, 450), cv2.FONT_HERSHEY_SIMPLEX, 3, (255, 0, 0), 10)
# Flip the image horizontally for a selfie-view display.
cv2.imshow('MediaPipe Hands', cv2.flip(image, 1))
if cv2.waitKey(5) & 0xFF == 27:
break
cap.release()
""" etatDuPouce=["neutre","baissé","levé"]
i=0
if results.multi_hand_landmarks:
if(results.multi_hand_landmarks.gestures.categories[0].categoryName==Thumb_Up):
cv2.putText(image, str(results.multi_hand_landmarks.gestures.categories[0].categoryName), (50, 450), cv2.FONT_HERSHEY_SIMPLEX, 3, (255, 0, 0), 10)
else:
cv2.putText(image, "raté", (50, 450), cv2.FONT_HERSHEY_SIMPLEX, 3, (255, 0, 0), 10)
"""


@@ -1,75 +0,0 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
trainSet = datasets.ImageFolder(r'C:\Users\kesha\Desktop\TelecomParis\PACT\DownloadedDataset\train',
transform = transforms.ToTensor())
valSet = datasets.ImageFolder(r'C:\Users\kesha\Desktop\TelecomParis\PACT\DownloadedDataset\val',
transform = transforms.ToTensor())
trainloader = torch.utils.data.DataLoader(trainSet,
batch_size = 50,
shuffle = True)
valloader = torch.utils.data.DataLoader(valSet,
batch_size = 50,
shuffle = True)
class Net(nn.Module):
def __init__(self):
super().__init__()
#nn.Conv2d(channels_in, out_channels/number of filters, kernel size)
self.conv1 = nn.Conv2d(3, 16, 3)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(16, 32, 3)
self.conv3 = nn.Conv2d(32, 64, 3)
self.fc1 = nn.Linear(64*14*14, 16)
self.fc2 = nn.Linear(16, 6)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
#size = 16*126*126 then 16*63*63
x = self.pool(F.relu(self.conv2(x)))
#size = 32*61*61 then 32*30*30
x = self.pool(F.relu(self.conv3(x)))
#size = 64*28*28 then 64*14*14
x = torch.flatten(x, 1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
net = Net()
print(net)
criterion = nn.CrossEntropyLoss()
optimizer = optim.RMSprop(net.parameters(), lr=0.001)
device = torch.device('cuda')
for epoch in range(1, 7):
print('Starting epoch ' + str(epoch))
current_loss = 0
Epoch = []
Loss = []
for i, data in enumerate(trainloader, 0):
inputs, labels = data
# very important: clear the accumulated gradients before the backward pass
optimizer.zero_grad()
output = net(inputs)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
current_loss += loss.item()
print('epoch: ', epoch, " loss: ", current_loss)
Loss.append(current_loss)
Epoch.append(epoch)
plt.plot(Epoch, Loss)
plt.title('Valeur de la fonction cost en fonction de l\'epoch')
plt.show()
#to save a model: torch.save(net.state_dict(), file_location)


@@ -0,0 +1,27 @@
**/__pycache__
**/.venv
**/.classpath
**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/bin
**/charts
**/docker-compose*
**/compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
LICENSE
README.md


@@ -0,0 +1 @@
*.wav


@@ -0,0 +1,19 @@
FROM python:3.8
# Do not create .pyc files
ENV PYTHONDONTWRITEBYTECODE=1
# Print logs directly to the terminal
ENV PYTHONUNBUFFERED=1
# Install OpenCV's system dependencies
RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 portaudio19-dev python3-pyaudio pulseaudio -y
# Install the Python dependencies
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
# Create the working directory
WORKDIR /app
COPY . /app
CMD ["python", "main.py"]

Binary file not shown.


@@ -0,0 +1,22 @@
{
"ennuyant": {
"grade": 2,
"display": "Ennuyant"
},
"genial": {
"grade": 9,
"display": "Génial"
},
"j_ai_beaucoup_aime": {
"grade": 9,
"display": "J'ai beaucoup aimé"
},
"j_ai_trouve_ca_genial": {
"grade": 10,
"display": "J'ai trouvé ça génial"
},
"nul": {
"grade": 0,
"display": "Nul"
}
}


@@ -0,0 +1,145 @@
import librosa
import os
import numpy as np
import scipy.spatial.distance as dist
import pyaudio
import wave
import json
def dp(distmat):
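# Dynamic-programming accumulation for DTW: costmat[i+1, j+1] is the local distance
# distmat[i, j] plus the cheapest of the three predecessor cells; the returned cost is
# normalised by N+M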
N,M = distmat.shape
# Initialise the cost matrix
costmat =np.zeros((N+1,M+1))
for i in range (1,N+1):
costmat[i,0]=np.inf
for i in range (1,M+1):
costmat[0,i]=np.inf
for i in range (N):
for j in range (M):
# Compute the minimal cost to reach costmat[i][j]: three predecessor cells are possible and we keep the cheapest one
penalty = [
costmat[i,j], # case T==0 (diagonal)
costmat[i,j+1], # case T==1 (vertical)
costmat[i+1,j]] # case T==2 (horizontal)
ipenalty = np.argmin(penalty)
costmat[i+1,j+1] = distmat[i,j] + penalty[ipenalty]
# Drop the infinity padding row and column
costmat = costmat[1: , 1:]
return (costmat, costmat[-1, -1]/(N+M))
def calculate_mfcc(audio, sr):
# Define parameters for MFCC calculation
n_mfcc = 13
n_fft = 2048
hop_length = 512
fmin = 0
fmax = sr/2
# Calculate MFCCs
mfccs = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop_length, fmin=fmin, fmax=fmax)
return mfccs.T
def calculate_dtw_cost(mfccs_query , mfccs_train):
distmat = dist.cdist(mfccs_query, mfccs_train,"cosine")
costmat,mincost = dp(distmat)
return mincost
def recognize_speech(audio_query, audio_train_list, sr):  # sr: sampling rate
# Calculate MFCCs for query audio
mfccs_query = calculate_mfcc(audio_query, sr)
# Calculate DTW cost for each audio in training data
dtw_costs = []
for audio_train in audio_train_list:
mfccs_train = calculate_mfcc(audio_train, sr)
mincost = calculate_dtw_cost(mfccs_query, mfccs_train)
dtw_costs.append(mincost)
# Find index of word with lowest DTW cost
index = np.argmin(dtw_costs)
# Return recognized word
return index
def record_audio(filename, duration, sr):
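# Record `duration` seconds of mono 16-bit audio from the default input device and save it as <filename>.wav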
chunk = 1024
sample_format = pyaudio.paInt16
channels = 1
record_seconds = duration
filename = f"{filename}.wav"
p = pyaudio.PyAudio()
stream = p.open(format=sample_format,
channels=channels,
rate=sr,
frames_per_buffer=chunk,
input=True)
frames = []
print(f"Enregistrement en cours...")
for i in range(0, int(sr / chunk * record_seconds)):
data = stream.read(chunk)
frames.append(data)
stream.stop_stream()
stream.close()
p.terminate()
print("Enregistrement terminé")
wf = wave.open(filename, "wb")
wf.setnchannels(channels)
wf.setsampwidth(p.get_sample_size(sample_format))
wf.setframerate(sr)
wf.writeframes(b"".join(frames))
wf.close()
print(f"Fichier enregistré sous {filename}")
def coupe_silence(signal):
# Strip the leading silence: skip samples while the signal stays at zero,
# then return the trimmed signal
t = 0
while t < len(signal) and signal[t] == 0:
t += 1
return signal[t:]
def init_database():
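# Walk audio_data/: each sub-directory is a reference word and each file inside it is a recording of that word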
data_dir = "audio_data/"
words = []
files = []
for word in os.listdir(data_dir):
if not os.path.isfile(os.path.join(data_dir, word)):
for file in os.listdir(os.path.join(data_dir,word)):
if os.path.isfile(os.path.join(data_dir, word,file)):
print(word,os.path.join(data_dir, word,file))
words.append(word)
files.append(os.path.join(data_dir, word,file))
return words,files
def get_word_metadata(word):
with open("audio_data/metadata.json") as f:
data = json.loads(f.read())
return data[word]
# TODO: detect when no grade was given
def get_grade():
sr = 44100 # sampling rate
duration = 6 # recording duration in seconds
filename = "recording" # name of the file to record
data_dir = "audio_data/"
record_audio(filename, duration, sr)
audio_query, sr = librosa.load(f'{filename}.wav', sr=sr)
audio_query = coupe_silence(audio_query)
words, files = init_database()
audio_train_list = [librosa.load(file, sr=sr)[0] for file in files]
recognized_word_index = recognize_speech(audio_query, audio_train_list, sr)
recognized_word = words[recognized_word_index]
return get_word_metadata(recognized_word)


@@ -0,0 +1,97 @@
import cv2
import mediapipe as mp
import numpy as np
class HandDetector():
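# Wraps MediaPipe Hands: detect() grabs one camera frame, classifies each visible hand
# as thumbs up / thumbs down / neutral, and smooths the decision over the last
# BUFFER_LENGTH results before reporting a final result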
def __init__(self):
self.mp_drawing = mp.solutions.drawing_utils
self.mp_drawing_styles = mp.solutions.drawing_styles
self.mp_hands = mp.solutions.hands
self.cap = cv2.VideoCapture(0)
self.hands = self.mp_hands.Hands(
model_complexity=0,
min_detection_confidence=0.5,
min_tracking_confidence=0.5)
# Parameters
self.BUFFER_LENGTH = 30
self.DETECTION_THRESHOLD = 3/4
self.resultBuffer = []
def reset(self):
self.resultBuffer = []
def reconnaissancePouce(self,handLandmarks):
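# Same heuristic as the prototype script: the gesture only counts once all four
# non-thumb fingers are folded; the thumb tip's position relative to its MCP joint
# then selects thumbs_up or thumbs_down (image y axis points downward)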
etatDuPouce=["neutre","thumbs_down","thumbs_up"]
i=0
j=0
for cpt in range (0,4):
V1=[handLandmarks[(4*cpt)+6][0]-handLandmarks[(4*cpt)+5][0],handLandmarks[(4*cpt)+6][1]-handLandmarks[(4*cpt)+5][1]]
V2=[handLandmarks[(4*cpt)+8][0]-handLandmarks[(4*cpt)+6][0],handLandmarks[(4*cpt)+8][1]-handLandmarks[(4*cpt)+6][1]]
j=np.dot(V1,V2)
if (j>0.005):
return etatDuPouce[0]
V1=[handLandmarks[4][0]-handLandmarks[1][0],handLandmarks[4][1]-handLandmarks[1][1]]
V2=[handLandmarks[2][0]-handLandmarks[1][0],handLandmarks[2][1]-handLandmarks[1][1]]
if((np.dot(V1,V2))>0 and (handLandmarks[4][1]>handLandmarks[2][1])):
i=1
elif(np.dot(V1,V2)>0 and handLandmarks[4][1]<handLandmarks[2][1]):
i=2
return etatDuPouce[i]
def detect(self):
if self.cap.isOpened():
success, image = self.cap.read()
if not success:
print("Ignoring empty camera frame.")
# If loading a video, use 'break' instead of 'continue'.
return False
# To improve performance, optionally mark the image as not writeable to
# pass by reference.
image.flags.writeable = False
results = self.hands.process(image)
# print(results)
if results.multi_hand_landmarks:
handsPositions = []
for hand_landmarks in results.multi_hand_landmarks:
handLandmarks = []
# Fill list with x and y positions of each landmark
for landmarks in hand_landmarks.landmark:
handLandmarks.append([landmarks.x, landmarks.y])
# Append each hand's detected thumb state to the list
handsPositions.append(self.reconnaissancePouce(handLandmarks))
# Combine the states of the two hands into a single result
if(len(handsPositions) == 2):
if(handsPositions[0] == handsPositions[1]):
thumbState = handsPositions[0]
elif(handsPositions[0] == "neutre"):
thumbState = handsPositions[1]
elif(handsPositions[1] == "neutre"):
thumbState = handsPositions[0]
else:
thumbState = "neutre"
else:
thumbState = handsPositions[0]
self.resultBuffer.append(thumbState)
if(len(self.resultBuffer) > self.BUFFER_LENGTH):
self.resultBuffer.pop(0)
thumbsUpCount = sum(map(lambda x : x == "thumbs_up", self.resultBuffer))
thumbsDownCount = sum(map(lambda x : x == "thumbs_down", self.resultBuffer))
if(thumbsUpCount > self.DETECTION_THRESHOLD * self.BUFFER_LENGTH):
result = "thumbs_up"
elif(thumbsDownCount > self.DETECTION_THRESHOLD * self.BUFFER_LENGTH):
result = "thumbs_down"
else:
result = False
if(thumbState != "neutre"):
return thumbState, handLandmarks[9], np.linalg.norm(np.array(handLandmarks[9]) - np.array(handLandmarks[0])), result
return False


@@ -0,0 +1,5 @@
from manager import Manager
if __name__ == "__main__":
print("backend started")
m = Manager()
m.loop()


@@ -0,0 +1,92 @@
from hand_detector import HandDetector
from audio_detector import get_grade
from network import ApiClient, WebsocketServer
import time
#Class that coordinates the different modules and builds the review as the user goes through the steps
class Manager():
def __init__(self):
self.state = 0
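# Kiosk state machine: 0 = idle, 1 = camera/gesture step, 2 = audio step, 3 = thank-you screen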
self.defaultAvis = {
"note": None,
"commentaire": None,
"notes_autres": {}
}
self.TIMEOUT_CAMERA = 5
self.avis = self.defaultAvis
self.server = WebsocketServer(None)
self.server.start()
self.handDetector = HandDetector()
self.api = ApiClient()
self.timeLastChange = time.time()
self.isLastHandPacketEmpty = False
print("Backend ready")
#Main loop
def loop(self):
while(True):
if(self.state == 0):
self.sleep()
if(self.state == 1):
self.camera()
if(self.state == 2):
self.audio()
if(self.state == 3):
self.thankYou()
time.sleep(0.01)
#Runs while the kiosk is idle; wakes it up as soon as a hand is detected
def sleep(self):
res = self.handDetector.detect()
if(res != False):
self.state = 1
self.timeLastChange = time.time()
self.server.sendMessage({"type": "state", "state": 1})
#Sends the hand position to the screen and moves on to the next step once a hand has been detected for long enough
def camera(self):
if(time.time() - self.timeLastChange > self.TIMEOUT_CAMERA):
self.server.sendMessage({"type":"reset"})
self.reset()
return
res = self.handDetector.detect()
if(res != False):
state, coords, size, finalDecision = res
self.server.sendMessage({"type": "effects", "effects": [{"type": state, "x":coords[0], "y": coords[1], "width": size, "height": size}]})
self.isLastHandPacketEmpty = False
if(finalDecision != False):
self.avis["note"] = 10 if finalDecision == "thumbs_up" else 0
self.state = 2
self.timeLastChange = time.time()
self.server.sendMessage({"type": "state", "state": 2})
elif self.isLastHandPacketEmpty == False:
self.server.sendMessage({"type":"effects","effects":[]})
self.isLastHandPacketEmpty = True
def audio(self):
result = get_grade()
if(result != False):
self.server.sendMessage({"type":"new_grade","word":result["display"]})
self.avis["notes_autres"]["test"] = result["grade"]
time.sleep(3)
self.state = 3
self.timeLastChange = time.time()
self.server.sendMessage({"type": "state", "state": 3})
def thankYou(self):
time.sleep(10)
print("Reseting...")
self.timeLastChange = time.time()
self.server.sendMessage({"type": "state", "state": 0})
res = self.api.send(self.avis["note"],self.avis["notes_autres"]["test"])
print(res.text)
self.reset()
def reset(self):
self.state = 0
self.avis = self.defaultAvis
self.handDetector.reset()


@@ -0,0 +1,49 @@
import requests
import asyncio
import json
import os
import threading
import websockets
class WebsocketServer(threading.Thread):
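# Runs an asyncio websockets server in a background thread; sendMessage() queues a dict
# that handler() serialises to JSON and pushes to the connected client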
def __init__(self, onMessage, port=os.getenv("PORT"), host=os.getenv("HOST")):
threading.Thread.__init__(self)
self.host = host
self.port = port
self.messageQueue = []
self.onMessage = onMessage
def run(self):
print("server thread started")
asyncio.run(self.runServer())
async def runServer(self):
async with websockets.serve(self.handler, self.host, self.port):
await asyncio.Future()
async def handler(self,websocket):
while True:
for msg in self.messageQueue:
# print("sending", json.dumps(msg))
await websocket.send(json.dumps(msg))
self.messageQueue.pop(0)
await asyncio.sleep(0.01)
def sendMessage(self,message):
self.messageQueue.append(message)
class ApiClient():
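# Thin HTTP client for the reviews API: send() POSTs a new review to the /add_review endpoint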
def __init__(self, host=os.getenv("API_HOST"), port=os.getenv("API_PORT")):
self.host = host
self.port = port
def send(self,note,note_autre):
#Example: adding a review from the kiosk (form or gesture)
avis = {
"note": note,
"source": "borne",
"commentaire":"",
#Optional
"notes_autre": '{"proprete":'+str(note_autre)+',"calme":10}',
}
return requests.post("http://"+self.host+":"+self.port+"/add_review", data=avis)
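# Usage sketch (assumes the reviews_api container is reachable at API_HOST:API_PORT):
#   client = ApiClient()
#   client.send(note=8, note_autre=7)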

Binary file not shown.


@@ -0,0 +1,8 @@
websockets
requests
opencv-python
mediapipe
numpy
pyaudio
librosa
scipy


@@ -1,27 +1,12 @@
-- phpMyAdmin SQL Dump
-- version 4.9.5deb2
-- https://www.phpmyadmin.net/
--
-- Host: localhost:3306
-- Generation Time: Dec 26, 2022 at 10:31 AM
-- Server version: 8.0.31-0ubuntu0.20.04.1
-- PHP Version: 7.4.3
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
SET AUTOCOMMIT = 0;
START TRANSACTION;
SET time_zone = "+00:00";
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;
--
-- Database: `telereview`
--
CREATE DATABASE IF NOT EXISTS `telereview` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
CREATE DATABASE IF NOT EXISTS `telereview`;
USE `telereview`;
-- --------------------------------------------------------
@@ -32,9 +17,9 @@ USE `telereview`;
CREATE TABLE `borne_auteurs` (
`id` int NOT NULL,
`sexe` tinytext CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci,
`sexe` tinytext ,
`age` tinyint DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
-- --------------------------------------------------------
@@ -49,7 +34,7 @@ CREATE TABLE `borne_avis` (
`note_principale` tinyint NOT NULL,
`commentaire` text NOT NULL,
`source_id` int NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
-- --------------------------------------------------------
@@ -60,7 +45,7 @@ CREATE TABLE `borne_avis` (
CREATE TABLE `borne_criteres` (
`id` int NOT NULL,
`nom` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
--
-- Dumping data for table `borne_criteres`
@@ -83,7 +68,7 @@ CREATE TABLE `borne_notes_autre` (
`critere_id` int NOT NULL,
`avis_id` int NOT NULL,
`note` int NOT NULL COMMENT 'Note sur 10'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
-- --------------------------------------------------------
@@ -96,7 +81,7 @@ CREATE TABLE `reseaux_sociaux_auteurs` (
`nom_utilisateur` text NOT NULL,
`source_id` int NOT NULL,
`lien` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
-- --------------------------------------------------------
@@ -109,10 +94,10 @@ CREATE TABLE `reseaux_sociaux_avis` (
`date` date NOT NULL,
`source_id` int NOT NULL,
`note` tinyint DEFAULT NULL,
`commentaire` text CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci,
`commentaire` text ,
`auteur_id` int NOT NULL,
`lien_source` text CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
`lien_source` text
) ;
-- --------------------------------------------------------
@@ -123,7 +108,7 @@ CREATE TABLE `reseaux_sociaux_avis` (
CREATE TABLE `sources` (
`id` int NOT NULL,
`nom` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
--
-- Dumping data for table `sources`
@@ -131,8 +116,7 @@ CREATE TABLE `sources` (
INSERT INTO `sources` (`id`, `nom`) VALUES
(1, 'website'),
(2, 'borne'),
(3, 'instagram');
(2, 'borne');
-- --------------------------------------------------------
@@ -145,7 +129,7 @@ CREATE TABLE `stats_autres_annee` (
`time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`critere_id` int NOT NULL,
`note` float NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
-- --------------------------------------------------------
@@ -158,7 +142,7 @@ CREATE TABLE `stats_autres_jour` (
`time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`critere_id` int NOT NULL,
`note` float NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
-- --------------------------------------------------------
@@ -171,7 +155,7 @@ CREATE TABLE `stats_autres_mois` (
`time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`critere_id` int NOT NULL,
`note` float NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
-- --------------------------------------------------------
@@ -184,7 +168,7 @@ CREATE TABLE `stats_autres_semaine` (
`time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`critere_id` int NOT NULL,
`note` float NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
) ;
-- --------------------------------------------------------
@@ -195,12 +179,13 @@ CREATE TABLE `stats_autres_semaine` (
CREATE TABLE `stats_general_annee` (
`id` int NOT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`moyenne_globale` float NOT NULL,
`moyenne_site` float NOT NULL,
`moyenne_borne` float NOT NULL,
`dist_age` text NOT NULL COMMENT 'Distribution de l''age des auteurs',
`dist_sexe` text NOT NULL COMMENT 'Distribution du sexe des auteurs'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
`nb_avis` int NOT NULL,
`moyenne_globale` float DEFAULT NULL,
`moyenne_site` float DEFAULT NULL,
`moyenne_borne` float DEFAULT NULL,
`dist_age` text DEFAULT NULL COMMENT 'Distribution de l''age des auteurs',
`dist_sexe` text DEFAULT NULL COMMENT 'Distribution du sexe des auteurs'
) ;
-- --------------------------------------------------------
@@ -211,12 +196,13 @@ CREATE TABLE `stats_general_annee` (
CREATE TABLE `stats_general_jour` (
`id` int NOT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`nb_avis` int NOT NULL,
`moyenne_globale` float DEFAULT NULL,
`moyenne_site` float DEFAULT NULL,
`moyenne_borne` float DEFAULT NULL,
`dist_age` text CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci COMMENT 'Distribution de l''age des auteurs',
`dist_sexe` text CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci COMMENT 'Distribution du sexe des auteurs'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
`dist_age` text DEFAULT NULL COMMENT 'Distribution de l''age des auteurs',
`dist_sexe` text DEFAULT NULL COMMENT 'Distribution du sexe des auteurs'
) ;
-- --------------------------------------------------------
@@ -227,12 +213,13 @@ CREATE TABLE `stats_general_jour` (
CREATE TABLE `stats_general_mois` (
`id` int NOT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`moyenne_globale` float NOT NULL,
`moyenne_site` float NOT NULL,
`moyenne_borne` float NOT NULL,
`dist_age` text NOT NULL COMMENT 'Distribution de l''age des auteurs',
`dist_sexe` text NOT NULL COMMENT 'Distribution du sexe des auteurs'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
`nb_avis` int NOT NULL,
`moyenne_globale` float DEFAULT NULL,
`moyenne_site` float DEFAULT NULL,
`moyenne_borne` float DEFAULT NULL,
`dist_age` text DEFAULT NULL COMMENT 'Distribution de l''age des auteurs',
`dist_sexe` text DEFAULT NULL COMMENT 'Distribution du sexe des auteurs'
) ;
-- --------------------------------------------------------
@@ -243,12 +230,13 @@ CREATE TABLE `stats_general_mois` (
CREATE TABLE `stats_general_semaine` (
`id` int NOT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`moyenne_globale` float NOT NULL,
`moyenne_site` float NOT NULL,
`moyenne_borne` float NOT NULL,
`dist_age` text NOT NULL COMMENT 'Distribution de l''age des auteurs',
`dist_sexe` text NOT NULL COMMENT 'Distribution du sexe des auteurs'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
`nb_avis` int NOT NULL,
`moyenne_globale` float DEFAULT NULL,
`moyenne_site` float DEFAULT NULL,
`moyenne_borne` float DEFAULT NULL,
`dist_age` text DEFAULT NULL COMMENT 'Distribution de l''age des auteurs',
`dist_sexe` text DEFAULT NULL COMMENT 'Distribution du sexe des auteurs'
) ;
--
-- Indexes for dumped tables
@@ -438,7 +426,3 @@ ALTER TABLE `stats_general_mois`
ALTER TABLE `stats_general_semaine`
MODIFY `id` int NOT NULL AUTO_INCREMENT;
COMMIT;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;

112
code/docker-compose.yaml Normal file

@@ -0,0 +1,112 @@
version: "3.9"
services:
#MySQL database of the kiosk, where all the reviews and stats are stored
db:
image: mysql:latest
container_name: db
expose:
- 3306
volumes:
- ./db:/docker-entrypoint-initdb.d
restart: always
healthcheck:
test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost", "-uroot"] # Command to check health.
interval: 5s # Interval between health checks.
timeout: 5s # Timeout for each health check.
retries: 20 # How many times to retry.
start_period: 10s # Estimated time to boot.
environment:
MYSQL_ROOT_PASSWORD: telereview
MYSQL_DATABASE: telereview
#Administration interface for the database
phpmyadmin:
image: phpmyadmin:latest
restart: always
container_name: phpmyadmin
depends_on:
db:
condition: service_healthy
environment:
PMA_ARBITRARY: 1
PMA_HOST: db
PMA_USER: root
PMA_PASSWORD: telereview
ports:
- 8000:80
#Reviews management API: add or fetch reviews and review statistics through HTTP requests
reviews_api:
container_name: reviews_api
expose:
- 8080
ports:
- 8080:8080
environment:
- NODE_ENV=production
- DB_USER=root
- DB_PASSWORD=telereview
- DB_HOST=db
- DB_NAME=telereview
- PORT=8080
depends_on:
db:
condition: service_healthy
build: ./reviews_api
restart: always
# Web server for the kiosk interface
interface_borne:
image: httpd:latest
volumes:
- ./interface_borne:/usr/local/apache2/htdocs/
container_name: interface_borne
ports:
- 8888:80
#Web server for the admin interface
interface_admin:
image: httpd:latest
volumes:
- ./interface_admin/out:/usr/local/apache2/htdocs/
container_name: interface_admin
ports:
- 800:80
#Review feedback form
formulaire:
image: httpd:latest
volumes:
- ./formulaire:/usr/local/apache2/htdocs/
container_name: formulaire
ports:
- 80:80
# Kiosk backend: Python scripts for video and audio recognition
# They send information to the kiosk interface over a websocket so the UI updates quickly
# They update the reviews by sending requests to the API
backend_reconnaissance:
build: ./backend_reconnaissance
container_name: backend_reconnaissance
restart: always
devices:
- /dev/video3:/dev/video0
- /dev/snd:/dev/snd
environment:
- PORT=5000
- HOST=backend_reconnaissance
- API_HOST=reviews_api
- API_PORT=8080
ports:
#This container is the websocket server whose client is the kiosk interface running in the browser
- 5000:5000
user: root
video_loopback:
build: ./video_loopback
container_name: video_loopback
restart: always
devices:
- /dev/video0:/dev/video0
- /dev/video2:/dev/video1
- /dev/video3:/dev/video2

3
code/interface_admin/.gitignore vendored Normal file

@@ -0,0 +1,3 @@
.next
package-lock.json
out


@@ -0,0 +1,34 @@
This is a [Next.js](https://nextjs.org/) project bootstrapped with [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app).
## Getting Started
First, run the development server:
```bash
npm run dev
# or
yarn dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
You can start editing the page by modifying `pages/index.js`. The page auto-updates as you edit the file.
[API routes](https://nextjs.org/docs/api-routes/introduction) can be accessed on [http://localhost:3000/api/hello](http://localhost:3000/api/hello). This endpoint can be edited in `pages/api/hello.js`.
The `pages/api` directory is mapped to `/api/*`. Files in this directory are treated as [API routes](https://nextjs.org/docs/api-routes/introduction) instead of React pages.
## Learn More
To learn more about Next.js, take a look at the following resources:
- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js/) - your feedback and contributions are welcome!
## Deploy on Vercel
The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.
Check out our [Next.js deployment documentation](https://nextjs.org/docs/deployment) for more details.


@@ -0,0 +1,58 @@
import React from 'react'
import { Card, Col, Row, Table } from 'react-bootstrap';
import { BsPersonFill } from 'react-icons/bs';
import styles from '../styles/Avis.module.css'
export default function Avis({review}) {
const {date, note_principale,notes_autres, commentaire, sexe_auteur, nom_source, age_auteur} = review;
return (
<Card>
<Card.Title>Avis</Card.Title>
<Card.Body>
<Row>
<h2>Auteur</h2>
<Col xs={1}>
<BsPersonFill className={styles.personIcon} />
</Col>
<Col className='d-flex flex-column'>
<p>Age : {age_auteur}</p>
<p>Sexe : {sexe_auteur}</p>
<p>Date de publication : {date}</p>
<p>Source : {nom_source}</p>
</Col>
</Row>
<Row>
<h2>Notes</h2>
<Table>
<thead>
<tr>
<th>Critère</th>
<th>Note</th>
</tr>
</thead>
<tbody>
<tr>
<td>Général</td>
<td>{note_principale} / 10</td>
</tr>
{notes_autres && notes_autres.map(({ critere, note }) => {
return <tr key={critere}>
<td>{critere}</td>
<td>{note}/10</td>
</tr>
})}
</tbody>
</Table>
</Row>
<Row>
<Card>
<Card.Header>Commentaire</Card.Header>
<Card.Body>
{commentaire}
</Card.Body>
</Card>
</Row>
</Card.Body>
</Card>
)
}


@@ -0,0 +1,34 @@
import { useRouter } from 'next/router';
import React from 'react'
import { Table } from 'react-bootstrap'
import styles from '../styles/AvisList.module.css'
export default function AvisList({ avis }) {
const router = useRouter();
function handleClick(id) {
router.push(`/avis/${id}`);
}
return (
<Table>
<thead>
<tr>
<th>Date</th>
<th>Note globale</th>
<th>Commentaire</th>
<th>Source</th>
</tr>
</thead>
<tbody>
{avis.map(({ id, note_principale, commentaire, date, nom_source }) => {
return <tr onClick={() => handleClick(id)} key={id} className={styles.row}>
<td>{date}</td>
<td>{note_principale} / 10</td>
<td>{commentaire}</td>
<td>{nom_source}</td>
</tr>
})}
</tbody>
</Table>
)
}


@@ -0,0 +1,41 @@
import React from 'react'
import { Bar } from 'react-chartjs-2'
import Chart from 'chart.js/auto'; // DO NOT remove this import: it registers the Chart.js components
export default function ComparativeBarChart({ xlabels, data0, label0, data1, label1}) {
return (
<Bar
options={{
responsive: true,
interaction: {
intersect: false,
},
scales: {
x: {
stacked: true,
},
y: {
stacked: true
}
}
}}
data={{
labels: xlabels,
datasets: [
{
label: label0,
data: data0,
backgroundColor: "#FF3B30",
stack: "stack0"
},
{
label: label1,
data: data1,
backgroundColor: "#0000FF",
stack: "stack1"
}
]
}}
/>
)
}


@@ -0,0 +1,23 @@
import React from 'react'
import Link from 'next/link'
import Container from 'react-bootstrap/Container';
import Nav from 'react-bootstrap/Nav';
import Navbar from 'react-bootstrap/Navbar';
export default function Menu() {
return (
<Navbar bg="light" expand="lg">
<Container>
<Navbar.Brand href="#home">Téléreview</Navbar.Brand>
<Navbar.Toggle aria-controls="basic-navbar-nav" />
<Navbar.Collapse id="basic-navbar-nav">
<Nav className="me-auto">
<Link href="/" passHref legacyBehavior><Nav.Link>Accueil</Nav.Link></Link>
<Link href="/stats" passHref legacyBehavior><Nav.Link>Statistiques</Nav.Link></Link>
<Link href="/avis" passHref legacyBehavior><Nav.Link>Avis</Nav.Link></Link>
</Nav>
</Navbar.Collapse>
</Container>
</Navbar>
)
}


@@ -0,0 +1,34 @@
import React, { useRef } from 'react'
import { Bar } from 'react-chartjs-2'
import Chart from 'chart.js/auto'; // DO NOT remove this import: it registers the Chart.js components
export default function StatBarChart({labels, data}) {
return (
<Bar
options={{
redraw: true,
responsive: true,
interaction: {
intersect: false,
},
scales: {
x: {
stacked: true,
},
y: {
stacked: true
}
}
}}
data={{
labels: labels,
datasets: [
{
data: data,
backgroundColor: "#FF3B30",
},
]
}}
/>
)
}


@@ -0,0 +1,3 @@
export const api = {
HOST: 'localhost:8080'
}


@@ -0,0 +1,32 @@
import { useEffect, useState } from "react";
import { api } from "../config/reviewsApi";
function useReview(reviewId) {
const [review, setReview] = useState({});
const [loading, setLoading] = useState(true);
const [error, setError] = useState(false);
async function fetchData(id) {
const response = await fetch('http://' + api.HOST + `/borne/get_review?id=${id}`)
if (response.ok) {
const jsonData = await response.json();
setReview(jsonData);
setLoading(false);
setError(false);
} else {
setError(true);
setLoading(false);
}
}
useEffect(() => {
if (reviewId) {
fetchData(reviewId);
}
}, [reviewId])
return { review, loading, error }
}
export default useReview;


@@ -0,0 +1,30 @@
import { useEffect, useState } from "react";
import { api } from "../config/reviewsApi";
export default function useReviews() {
const [reviews, setReviews] = useState([]);
const [loading, setLoading] = useState(true);
const [error, setError] = useState(false);
async function fetchLastReviews(limit=100) {
setLoading(true);
const response = await fetch('http://' + api.HOST + '/borne/get_last_reviews', {
method: 'GET'
})
if(response.ok) {
let json = await response.json()
setReviews(json);
setError(false);
setLoading(false);
}else {
setLoading(false);
setError(true);
}
}
useEffect(() => {
fetchLastReviews();
}, [])
return {reviews, error, loading, fetchLastReviews};
}


@@ -0,0 +1,26 @@
import { useEffect, useState } from "react";
import { api } from "../config/reviewsApi";
export default function useStats(limit, interval) {
const [stats, setStats] = useState({});
const [loading, setLoading] = useState(true);
const [error, setError] = useState(false);
async function fetchData(limit, interval) {
const response = await fetch("http://" + api.HOST + `/borne/get_stats?interval=${interval}&limit=${limit}`)
if(response.ok) {
const data = await response.json();
setStats(data);
setError(false);
}else {
setError(true)
}
setLoading(false);
}
useEffect(() => {
fetchData(limit, interval);
}, [limit, interval])
return {stats, loading, error};
}


@@ -0,0 +1,6 @@
/** @type {import('next').NextConfig} */
const nextConfig = {
reactStrictMode: true,
}
module.exports = nextConfig


@@ -0,0 +1,26 @@
{
"name": "interface-admin",
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start",
"export": "next export",
"lint": "next lint"
},
"dependencies": {
"@next/font": "13.1.6",
"bootstrap": "^5.2.3",
"chart.js": "^4.2.0",
"date-fns": "^2.29.3",
"eslint": "8.33.0",
"eslint-config-next": "13.1.6",
"next": "13.1.6",
"react": "18.2.0",
"react-bootstrap": "^2.7.0",
"react-chartjs-2": "^5.2.0",
"react-dom": "18.2.0",
"react-icons": "^4.7.1"
}
}


@@ -0,0 +1,13 @@
import Menu from '../components/Menu'
import '../styles/globals.css'
import 'bootstrap/dist/css/bootstrap.css';
import { Container } from 'react-bootstrap';
export default function App({ Component, pageProps }) {
return <>
<Menu />
<Container fluid="md">
<Component {...pageProps} />
</Container>
</>
}


@@ -0,0 +1,16 @@
import { Html, Head, Main, NextScript } from 'next/document'
export default function Document() {
return (
<Html lang="en">
<Head>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="icon" href="/favicon.ico" />
</Head>
<body>
<Main />
<NextScript />
</body>
</Html>
)
}


@@ -0,0 +1,12 @@
import { useRouter } from 'next/router';
import React from 'react'
import Avis from '../../components/Avis';
import useReview from '../../hooks/review';
export default function AvisPage() {
const router = useRouter();
const {id} = router.query;
const {review, loading, error} = useReview(id);
return (
!loading && !error && <Avis review={review}/>
)
}


@@ -0,0 +1,74 @@
import React, { useEffect, useState } from 'react'
import { Card, Container, Form, Row } from 'react-bootstrap'
import AvisList from '../../components/AvisList';
import useReviews from '../../hooks/reviews';
import styles from '../../styles/AvisListPage.module.css'
export default function AvisListPage() {
const [minGrade, setMinGrade] = useState(0);
const [maxGrade, setMaxGrade] = useState(10);
const [sources, setSources] = useState({'borne': true, 'website': true})
const [filteredReviews, setFilteredReviews] = useState([])
const {reviews, error, loading} = useReviews();
useEffect(() => {
const newReviews = reviews.filter((review) => review.note_principale >= minGrade && review.note_principale <= maxGrade && sources[review.nom_source])
setFilteredReviews(newReviews)
}, [reviews, minGrade, maxGrade, sources])
useEffect(() => {
if(minGrade > maxGrade) {
setMinGrade(maxGrade);
}
}, [maxGrade]);
useEffect(() => {
if(minGrade > maxGrade) {
setMaxGrade(minGrade);
}
}, [minGrade])
return (
<Container fluid>
<Card>
<Card.Header>Tous les avis</Card.Header>
<Card.Body>
<Row>
<Form>
<Form.Group>
<Form.Label>Types d'avis</Form.Label>
<Form.Check
type="switch"
label="Borne"
onChange={(e) => setSources({...sources, 'borne': e.target.checked})}
checked={sources['borne']}
/>
<Form.Check
type="switch"
label="QR Code"
onChange={(e) => setSources({...sources, 'website': e.target.checked})}
checked={sources['website']}
/>
</Form.Group>
<Form.Group>
<Form.Label>Note</Form.Label>
<div className='d-flex flex-row justify-content-around col-md-6'>
<div>Min : {minGrade}/10</div>
<div className={styles.sliderContainer}>
<input type="range" value={minGrade} onChange={(e) => setMinGrade(Number(e.target.value))} min="0" max="10" step="1" className={styles.slider}></input>
<input type="range" value={maxGrade} onChange={(e) => setMaxGrade(Number(e.target.value))} min="0" max="10" step="1" className={styles.slider}></input>
</div>
<div>Max : {maxGrade}/10</div>
</div>
</Form.Group>
</Form>
</Row>
<Row>
{!loading && !error && <AvisList avis={filteredReviews} />}
</Row>
</Card.Body>
</Card>
</Container >
)
}


@@ -0,0 +1,128 @@
import Head from 'next/head'
import { Card, Container } from 'react-bootstrap'
import ComparativeBarChart from '../components/ComparativeBarChart'
import { useEffect, useState } from 'react'
import styles from "../styles/Home.module.css"
import useStats from '../hooks/stats'
import getDay from 'date-fns/getDay'
import getWeek from '../util'
export default function Home() {
const [datasets, setDatasets] = useState(null);
const [averages, setAverages] = useState(null);
const [differences, setDifferences] = useState(null);
useEffect(() => {
if (datasets) {
let newAverages = []
let newDifferences = []
for (let i = 0; i < datasets.length; i++) {
let currentEntriesCount = 0;
let previousEntriesCount = 0;
for (let x of datasets[i].current) {
if (x != null) {
currentEntriesCount++;
}
}
for (let x of datasets[i].previous) {
if (x != null) {
previousEntriesCount++;
}
}
if (currentEntriesCount != 0) {
newAverages[i] = datasets[i].current.reduce((a, b) => a + b) / currentEntriesCount;
if (previousEntriesCount > 0) {
newDifferences[i] = newAverages[i] - datasets[i].previous.reduce((a, b) => a + b) / datasets[i].previous.length
} else {
newDifferences[i] = newAverages[i]
}
} else {
newDifferences[i] = 0;
newAverages[i] = 0;
}
}
setAverages(newAverages);
setDifferences(newDifferences);
}
}, [datasets]);
const { stats, loading, error } = useStats(14, "jour");
useEffect(() => {
if (!error && !loading) {
let reviewCount = [null, null, null, null, null, null, null];
let reviewCountPrev = [null, null, null, null, null, null, null]
let reviewAvg = [null, null, null, null, null, null, null]
let reviewAvgPrev = [null, null, null, null, null, null, null]
for (let i = 0; i < stats.length; i++) {
let date = new Date(Date.parse(stats[i].date))
let now = new Date();
let day = (date.getDay() + 6) % 7; // Monday = 0 … Sunday = 6
let week = getWeek(date, 1);
let thisWeek = getWeek(now, 1);
if (week == thisWeek) {
reviewCount[day] = stats[i].nb_avis;
reviewAvg[day] = stats[i].moyenne_globale;
} else if (week == thisWeek - 1) {
reviewAvgPrev[day] = stats[i].moyenne_globale;
reviewCountPrev[day] = stats[i].nb_avis;
}
}
setDatasets([
{ title: "Nombre d'avis", current: reviewCount, previous: reviewCountPrev },
{ title: "Notes moyennes", current: reviewAvg, previous: reviewAvgPrev }
])
}
}, [stats]);
function dataVisualizer(title, current, previous, average, difference) {
return <div key={title}>
<h3>{title}</h3>
<Card className={styles.averageCard}>
<Card.Title>Moyenne</Card.Title>
<Card.Body className={styles.averageCardBody}>
<div
className={styles.averageMainValue}
>
{Math.round(average * 1e2) / 1e2}
</div>
<div
className={[styles.averageCardSecondaryValue, difference >= 0 ? styles.averagePositive : styles.averageNegative].join(' ')}
>
{(difference >= 0 ? "+" : "") + Math.round(difference * 1e2) / 1e2}
</div>
</Card.Body>
</Card>
<ComparativeBarChart
xlabels={["lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche"]}
label0="Cette semaine"
label1="La semaine dernière"
data0={current}
data1={previous}
/>
<hr />
</div>
}
return (
<>
<Head>
<title>Telereview</title>
<meta name="description" content="Page d'accueil" />
</Head>
<Container fluid>
<Card>
<Card.Header as="h2">Vos performances cette semaine</Card.Header>
<Card.Body>
{datasets && averages && differences && datasets.map((set, i) => dataVisualizer(set.title, set.current, set.previous, averages[i], differences[i]))}
</Card.Body>
<div className='col col-12 col-lg-8 mx-auto'>
</div>
</Card>
</Container>
</>
)
}


@@ -0,0 +1,68 @@
import React, { useEffect, useState } from 'react'
import { Card, Container, Form, Row } from 'react-bootstrap';
import StatBarChart from '../components/StatBarChart';
import useStats from '../hooks/stats';
export default function Stats() {
const [statName,setStatName] = useState("moyenne_globale")
const [timeInterval, setTimeInterval] = useState("jour")
const [chartReady, setChartReady] = useState(false);
const [xlabels, setXlabels] = useState([]);
const [plotData, setPlotData] = useState([]);
const {loading, error, stats} = useStats(10,timeInterval);
useEffect(() => {
if(!loading && !error) {
let newXlabels = [];
let newPlotData = [];
for(let i = 0; i < stats.length; i++) {
newXlabels.push(stats[i].date);
newPlotData.push(stats[i][statName]);
}
setXlabels(newXlabels);
setPlotData(newPlotData);
setChartReady(true);
}else {
setChartReady(false);
}
}, [stats, statName, timeInterval, loading, error])
return (
<Container fluid>
<Card>
<Card.Header>Tous les avis</Card.Header>
<Card.Body>
<Row>
<Form>
<Form.Group>
<Form.Label>Statistique</Form.Label>
<Form.Select value={statName} onChange={(e) => setStatName(e.target.value)}>
<option value="moyenne_globale">Moyenne globale</option>
<option value="nb_avis">Nombre d'avis</option>
<option value="moyenne_site">Moyenne du formulaire</option>
<option value = "moyenne_borne">Moyenne sur la borne</option>
<option value="dist_sexes">Distribution sexes</option>
</Form.Select>
</Form.Group>
<Form.Group>
<Form.Label>Periode</Form.Label>
<Form.Select value={timeInterval} onChange={(e) => setTimeInterval(e.target.value)}>
<option value="jour">Jour</option>
<option value="semaine">Semaine</option>
<option value="mois">Mois</option>
<option value = "annee">Année</option>
</Form.Select>
</Form.Group>
</Form>
</Row>
<Row>
{error && <p>Error</p>}
{chartReady && <StatBarChart data={plotData} labels={xlabels} />}
</Row>
</Card.Body>
</Card>
</Container>
)
}

Binary file not shown.


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 394 80"><path fill="#000" d="M262 0h68.5v12.7h-27.2v66.6h-13.6V12.7H262V0ZM149 0v12.7H94v20.4h44.3v12.6H94v21h55v12.6H80.5V0h68.7zm34.3 0h-17.8l63.8 79.4h17.9l-32-39.7 32-39.6h-17.9l-23 28.6-23-28.6zm18.3 56.7-9-11-27.1 33.7h17.8l18.3-22.7z"/><path fill="#000" d="M81 79.3 17 0H0v79.3h13.6V17l50.2 62.3H81Zm252.6-.4c-1 0-1.8-.4-2.5-1s-1.1-1.6-1.1-2.6.3-1.8 1-2.5 1.6-1 2.6-1 1.8.3 2.5 1a3.4 3.4 0 0 1 .6 4.3 3.7 3.7 0 0 1-3 1.8zm23.2-33.5h6v23.3c0 2.1-.4 4-1.3 5.5a9.1 9.1 0 0 1-3.8 3.5c-1.6.8-3.5 1.3-5.7 1.3-2 0-3.7-.4-5.3-1s-2.8-1.8-3.7-3.2c-.9-1.3-1.4-3-1.4-5h6c.1.8.3 1.6.7 2.2s1 1.2 1.6 1.5c.7.4 1.5.5 2.4.5 1 0 1.8-.2 2.4-.6a4 4 0 0 0 1.6-1.8c.3-.8.5-1.8.5-3V45.5zm30.9 9.1a4.4 4.4 0 0 0-2-3.3 7.5 7.5 0 0 0-4.3-1.1c-1.3 0-2.4.2-3.3.5-.9.4-1.6 1-2 1.6a3.5 3.5 0 0 0-.3 4c.3.5.7.9 1.3 1.2l1.8 1 2 .5 3.2.8c1.3.3 2.5.7 3.7 1.2a13 13 0 0 1 3.2 1.8 8.1 8.1 0 0 1 3 6.5c0 2-.5 3.7-1.5 5.1a10 10 0 0 1-4.4 3.5c-1.8.8-4.1 1.2-6.8 1.2-2.6 0-4.9-.4-6.8-1.2-2-.8-3.4-2-4.5-3.5a10 10 0 0 1-1.7-5.6h6a5 5 0 0 0 3.5 4.6c1 .4 2.2.6 3.4.6 1.3 0 2.5-.2 3.5-.6 1-.4 1.8-1 2.4-1.7a4 4 0 0 0 .8-2.4c0-.9-.2-1.6-.7-2.2a11 11 0 0 0-2.1-1.4l-3.2-1-3.8-1c-2.8-.7-5-1.7-6.6-3.2a7.2 7.2 0 0 1-2.4-5.7 8 8 0 0 1 1.7-5 10 10 0 0 1 4.3-3.5c2-.8 4-1.2 6.4-1.2 2.3 0 4.4.4 6.2 1.2 1.8.8 3.2 2 4.3 3.4 1 1.4 1.5 3 1.5 5h-5.8z"/></svg>


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="40" height="31" fill="none"><g opacity=".9"><path fill="url(#a)" d="M13 .4v29.3H7V6.3h-.2L0 10.5V5L7.2.4H13Z"/><path fill="url(#b)" d="M28.8 30.1c-2.2 0-4-.3-5.7-1-1.7-.8-3-1.8-4-3.1a7.7 7.7 0 0 1-1.4-4.6h6.2c0 .8.3 1.4.7 2 .4.5 1 .9 1.7 1.2.7.3 1.6.4 2.5.4 1 0 1.7-.2 2.5-.5.7-.3 1.3-.8 1.7-1.4.4-.6.6-1.2.6-2s-.2-1.5-.7-2.1c-.4-.6-1-1-1.8-1.4-.8-.4-1.8-.5-2.9-.5h-2.7v-4.6h2.7a6 6 0 0 0 2.5-.5 4 4 0 0 0 1.7-1.3c.4-.6.6-1.3.6-2a3.5 3.5 0 0 0-2-3.3 5.6 5.6 0 0 0-4.5 0 4 4 0 0 0-1.7 1.2c-.4.6-.6 1.2-.6 2h-6c0-1.7.6-3.2 1.5-4.5 1-1.3 2.2-2.3 3.8-3C25 .4 26.8 0 28.8 0s3.8.4 5.3 1.1c1.5.7 2.7 1.7 3.6 3a7.2 7.2 0 0 1 1.2 4.2c0 1.6-.5 3-1.5 4a7 7 0 0 1-4 2.2v.2c2.2.3 3.8 1 5 2.2a6.4 6.4 0 0 1 1.6 4.6c0 1.7-.5 3.1-1.4 4.4a9.7 9.7 0 0 1-4 3.1c-1.7.8-3.7 1.1-5.8 1.1Z"/></g><defs><linearGradient id="a" x1="20" x2="20" y1="0" y2="30.1" gradientUnits="userSpaceOnUse"><stop/><stop offset="1" stop-color="#3D3D3D"/></linearGradient><linearGradient id="b" x1="20" x2="20" y1="0" y2="30.1" gradientUnits="userSpaceOnUse"><stop/><stop offset="1" stop-color="#3D3D3D"/></linearGradient></defs></svg>


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 283 64"><path fill="black" d="M141 16c-11 0-19 7-19 18s9 18 20 18c7 0 13-3 16-7l-7-5c-2 3-6 4-9 4-5 0-9-3-10-7h28v-3c0-11-8-18-19-18zm-9 15c1-4 4-7 9-7s8 3 9 7h-18zm117-15c-11 0-19 7-19 18s9 18 20 18c6 0 12-3 16-7l-8-5c-2 3-5 4-8 4-5 0-9-3-11-7h28l1-3c0-11-8-18-19-18zm-10 15c2-4 5-7 10-7s8 3 9 7h-19zm-39 3c0 6 4 10 10 10 4 0 7-2 9-5l8 5c-3 5-9 8-17 8-11 0-19-7-19-18s8-18 19-18c8 0 14 3 17 8l-8 5c-2-3-5-5-9-5-6 0-10 4-10 10zm83-29v46h-9V5h9zM37 0l37 64H0L37 0zm92 5-27 48L74 5h10l18 30 17-30h10zm59 12v10l-3-1c-6 0-10 4-10 10v15h-9V17h9v9c0-5 6-9 13-9z"/></svg>


@@ -0,0 +1,5 @@
.personIcon {
width: 100%;
height: 100%;
/* font-size: 50px; */
}


@@ -0,0 +1,6 @@
/* ==== TABLEAU ==== */
.row:hover {
cursor: pointer;
background-color: #EEE;
}


@@ -0,0 +1,57 @@
/* ==== SLIDER ==== */
.sliderContainer {
position: relative;
width: 300px;
}
.sliderContainer > input[type=range]::-webkit-slider-thumb {
-webkit-appearance: none;
pointer-events: all;
width: 24px;
height: 24px;
background-color: #fff;
border-radius: 50%;
box-shadow: 0 0 0 1px #C6C6C6;
cursor: pointer;
z-index: 99;
}
.sliderContainer > input[type=range]::-moz-range-thumb {
z-index: 99;
pointer-events: all;
width: 24px;
height: 24px;
background-color: #fff;
border-radius: 50%;
box-shadow: 0 0 0 1px #C6C6C6;
cursor: pointer;
}
.sliderContainer > input[type=range]::-webkit-slider-thumb:hover {
background: #f7f7f7;
}
.sliderContainer >input[type=range]::-webkit-slider-thumb:active {
box-shadow: inset 0 0 3px #387bbe, 0 0 9px #387bbe;
-webkit-box-shadow: inset 0 0 3px #387bbe, 0 0 9px #387bbe;
}
.sliderContainer > input[type=range]::-moz-range-thumb:hover {
background: #f7f7f7;
}
.sliderContainer >input[type=range]::-moz-range-thumb:active {
box-shadow: inset 0 0 3px #387bbe, 0 0 9px #387bbe;
-webkit-box-shadow: inset 0 0 3px #387bbe, 0 0 9px #387bbe;
}
.sliderContainer >input[type="range"] {
-webkit-appearance: none;
appearance: none;
height: 2px;
width: 100%;
position: absolute;
background-color: #C6C6C6;
pointer-events: none;
}


@@ -0,0 +1,29 @@
.averageCard {
width: min-content;
margin: 0 auto;
padding: 10px;
text-align: center;
min-width: 300px;
}
.averageCardBody {
display: flex;
align-items: center;
margin: 0 auto;
}
.averageMainValue {
font-size: 40px;
}
.averageCardSecondaryValue {
font-size: 20px;
}
.averagePositive {
color: green;
}
.averageNegative {
color: red;
}


@@ -0,0 +1,34 @@
/**
* Returns the week number for this date. dowOffset is the day of week the week
* "starts" on for your locale - it can be from 0 to 6. If dowOffset is 1 (Monday),
* the week returned is the ISO 8601 week number.
* @param int dowOffset
* @return int
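* @example getWeek(new Date(2023, 2, 23), 1) // returns 12 (ISO week containing 2023-03-23)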
*/
export default function getWeek (date,dowOffset) {
/*getWeek() was developed by Nick Baicoianu at MeanFreePath: http://www.meanfreepath.com */
dowOffset = typeof(dowOffset) == 'number' ? dowOffset : 0; //default dowOffset to zero
var newYear = new Date(date.getFullYear(),0,1);
var day = newYear.getDay() - dowOffset; //the day of week the year begins on
day = (day >= 0 ? day : day + 7);
var daynum = Math.floor((date.getTime() - newYear.getTime() -
(date.getTimezoneOffset()-newYear.getTimezoneOffset())*60000)/86400000) + 1;
var weeknum;
//if the year starts before the middle of a week
if(day < 4) {
weeknum = Math.floor((daynum+day-1)/7) + 1;
if(weeknum > 52) {
var nYear = new Date(date.getFullYear() + 1,0,1);
var nday = nYear.getDay() - dowOffset;
nday = nday >= 0 ? nday : nday + 7;
/*if the next year starts before the middle of
the week, it is week #1 of that year*/
weeknum = nday < 4 ? 1 : 53;
}
}
else {
weeknum = Math.floor((daynum+day-1)/7);
}
return weeknum;
};

File diff suppressed because one or more lines are too long


@@ -0,0 +1,56 @@
* {
font-family: Arial, Helvetica, sans-serif;
}
html, body {
margin: 0;
height: 100%;
}
.page {
width: 100%;
height: 100%;
}
#camera > video, #camera > canvas {
position: absolute;
top: 0;
left: 0;
text-align: center;
margin-left: auto;
margin-right: auto;
left: 0;
right: 0;
}
#camera > video {
z-index: 0;
}
#camera > canvas {
z-index: 1;
}
.instructions {
width: max-content;
height: 300px;
margin: auto;
background: #A6CC00;
padding: 20px;
border-radius: 10px;
border: 3px #6B8000 solid;
position: absolute;
top: 0;
bottom: 0;
left: 0;
right: 0;
text-align: center;
}
.instructions > .title {
border-bottom: 3px #6B8000 solid;
}
.instructions > table, .instructions > th,.instructions > td {
border: 1px solid #6B8000;
border-collapse: collapse;
}

File diff suppressed because one or more lines are too long


Binary file not shown.

After

Width:  |  Height:  |  Size: 202 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 150 KiB

View File

@@ -0,0 +1,22 @@
class AudioPage {
constructor() {
this.isEnabled = false;
this.DOMElement = document.getElementById("audio");
}
set enabled(isEnabled) {
this.isEnabled = isEnabled;
this.DOMElement.style.display = isEnabled ? "block" : "none";
document.getElementById("grade").innerHTML = "";
}
setGrade(grade) {
if(this.isEnabled) {
document.getElementById("grade").innerHTML = grade.toString();
}
}
reset() {
document.getElementById("grade").innerHTML = "";
}
}

View File

@@ -0,0 +1,153 @@
class CameraPage {
constructor() {
this.spinnerWeight = 10;
this.spinnerColor = "#0F0FFF";
this.canvas = document.getElementById("overlay-canvas");
this.ctx = this.canvas.getContext("2d");
this.video = document.getElementById("camera-video");
this.width = null;
this.height = null; // computed automatically from the width of the video stream
this.videoWidth = null;
this.videoHeight = null;
this.streaming = false;
this.activeEffects = [];
this.images = {};
this._startup();
this._loadImages();
this._enabled = false;
this.DOMElement = document.getElementById("camera");
}
set enabled(val) {
this._enabled = val;
this.DOMElement.style.display = val ? "block" : "none";
if (val) {
this._frame();
this.video.play();
}else {
this.video.pause();
}
}
get enabled() {
return this._enabled;
}
_startup() {
navigator.mediaDevices
.getUserMedia({ video: true, audio: false })
.then((stream) => {
this.video.srcObject = stream;
this.video.play();
})
.catch((err) => {
console.error(`Error while reading the camera: ${err}`);
});
this.video.addEventListener(
"canplay",
(ev) => {
if (!this.streaming) {
//compute the video size from the window size so the video always stays fully visible
let aspectRatio = this.video.videoWidth / this.video.videoHeight;
if (window.innerHeight * aspectRatio > window.innerWidth) {
this.width = window.innerWidth;
this.height = window.innerWidth / aspectRatio;
} else {
this.width = window.innerHeight * aspectRatio;
this.height = window.innerHeight;
}
this.videoHeight = this.video.videoHeight;
this.videoWidth = this.video.videoWidth;
this.video.setAttribute("width", this.width);
this.video.setAttribute("height", this.height);
this.canvas.setAttribute("width", this.width);
this.canvas.setAttribute("height", this.height);
this.streaming = true;
}
},
false
);
}
_loadImages() {
this.images.thumbsUp = new Image();
this.images.thumbsUp.src = "assets/img/thumbs_up.png";
this.images.thumbsDown = new Image();
this.images.thumbsDown.src = "assets/img/thumbs_down.png";
}
_frame() {
if (this.streaming && this.enabled && this.width && this.height) {
this.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height);
this._drawEffects();
}
if (this.enabled) {
requestAnimationFrame(() => this._frame());
}
}
_scaleEffect(x, y, width, height) {
let xScale = this.width / this.videoWidth;
let yScale = this.height / this.videoHeight;
return {
x: x * xScale,
y: y * yScale,
width: width * xScale,
height: height * yScale
}
}
_drawEffects() {
for (let effect of this.activeEffects) {
let { x, y, width, height } = this._scaleEffect(effect.x, effect.y, effect.width, effect.height);
width = width * this.videoWidth * 2;
height = height * this.videoHeight * 2;
x = x * this.videoWidth - width / 2;
y = y * this.videoHeight - height / 2;
console.log(width, height);
if (effect.type == "thumbs_down") {
this._drawThumbsDown(x, y, width, height);
}
if (effect.type == "thumbs_up") {
this._drawThumbsUp(x, y, width, height);
}
if (effect.type == "loading") {
this._drawLoading(x, y, width, effect.progress);
}
}
}
_drawLoading(x, y, radius, progress) {
this.ctx.lineWidth = this.spinnerWeight;
this.ctx.strokeStyle = this.spinnerColor;
this.ctx.beginPath();
this.ctx.arc(x, y, radius, 0, progress * 2 * Math.PI);
this.ctx.stroke();
}
_drawThumbsDown(x, y, width, height) {
this.ctx.beginPath();
this.ctx.drawImage(this.images.thumbsDown, x, y, width, height);
this.ctx.stroke();
}
_drawThumbsUp(x, y, width, height) {
this.ctx.beginPath();
this.ctx.drawImage(this.images.thumbsUp, x, y, width, height);
this.ctx.stroke();
}
setEffects(effects) {
this.activeEffects = effects;
}
reset() {
this.activeEffects = [];
}
}

View File

@@ -0,0 +1,5 @@
let stateManager;
window.addEventListener("load", () => {
stateManager = new StateManager();
}, false);

View File

@@ -0,0 +1,22 @@
class WebsocketClient {
constructor(onNewEffects, onNewState, onNewGrade, onReset) {
this.socket = new WebSocket("ws://localhost:5000");
this.socket.addEventListener("open", (event) => {
this.socket.send("connected");
console.log("connected")
});
this.socket.onmessage = (event) => {
let msg = JSON.parse(event.data);
if (msg.type == "effects") {
onNewEffects(msg.effects);
}else if(msg.type == "state") {
onNewState(msg.state);
}else if(msg.type == "new_grade") {
onNewGrade(Number(msg.grade));
}else if(msg.type == "reset") {
onReset();
}
};
}
}

View File

@@ -0,0 +1,12 @@
class SleepingPage {
constructor(onWakeUp) {
this.onWakeUp = onWakeUp;
this.isEnabled = false;
this.DOMElement = document.getElementById("sleeping-page");
}
set enabled(isEnabled) {
this.isEnabled = isEnabled;
this.DOMElement.style.display = isEnabled ? "block" : "none";
}
}

View File

@@ -0,0 +1,63 @@
const STATE = {
sleeping: 0,
video: 1,
audio: 2,
thankYou: 3,
};
class StateManager {
constructor() {
this._state = STATE.sleeping;
this._cameraPage = new CameraPage();
this._sleepingPage = new SleepingPage();
this._audioPage = new AudioPage();
this._thankYouPage = new ThankYouPage();
this.wsClient = new WebsocketClient(
(effects) => {
this.setState(STATE.video);
this._cameraPage.setEffects(effects)
},
(state) => this.setState(state),
(grade) => this._audioPage.setGrade(grade),
() => this.reset(),
);
this._sleepingPage.enabled = true;
this._cameraPage.enabled = false;
this._audioPage.enabled = false;
this._thankYouPage.enabled = false;
}
setState(newState) {
console.log({current:this._state,new:newState})
if(this._state == STATE.sleeping && newState == STATE.video) {
this._cameraPage.enabled = true;
this._sleepingPage.enabled = false;
this._state = newState;
}else if(this._state == STATE.video && newState == STATE.audio) {
this._cameraPage.enabled = false;
this._audioPage.enabled = true;
this._state = newState;
}else if(this._state == STATE.audio && newState == STATE.thankYou) {
this._audioPage.enabled = false;
this._thankYouPage.enabled = true;
this._state = newState;
}else if(this._state == STATE.thankYou && newState == STATE.sleeping) {
this._thankYouPage.enabled = false;
this._sleepingPage.enabled = true;
this._state = newState;
}
}
reset() {
this._state = STATE.sleeping;
this._cameraPage.enabled = false;
this._audioPage.enabled = false;
this._thankYouPage.enabled = false;
this._audioPage.reset();
this._cameraPage.reset();
this._sleepingPage.enabled = true;
}
}

View File

@@ -0,0 +1,11 @@
class ThankYouPage {
constructor() {
this.isEnabled = false;
this.DOMElement = document.getElementById("thank-you");
}
set enabled(isEnabled) {
this.isEnabled = isEnabled;
this.DOMElement.style.display = isEnabled ? "block" : "none";
}
}

View File

@@ -0,0 +1,68 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="assets/css/main.css">
<!-- <link rel="stylesheet" href="assets/css/bootstrap-grid.min.css"> -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Roboto&display=swap" rel="stylesheet">
<title>Téléreview</title>
</head>
<body>
<div id="sleeping-page" class="page">
<div class="instructions">
<div class="title">
<h1>Votre avis nous intéresse</h1>
</div>
<span>Faites un</span>
<img width=50 src="assets/img/thumbs_up.png">
<span>ou un</span>
<img width=50 src="assets/img/thumbs_down.png">
<span> avec votre main pour commencer</span>
</div>
</div>
<div id="camera">
<canvas id="overlay-canvas"></canvas>
<video id="camera-video"></video>
</div>
<div id="audio">
<div class="instructions">
<div class="title">
<h1>Dites-nous en plus</h1>
</div>
<p>Donnez une note sur 10 au critère suivant</p>
<table>
<tr>
<th>Critère</th>
<th>Note / 10</th>
</tr>
<tr>
<td>Calme</td>
<td><span id="grade"></span>/10</td>
</tr>
</table>
</div>
</div>
<div id="thank-you">
<div class="instructions">
<div class="title">
<h1>Merci pour votre avis</h1>
</div>
<span>Nous espérons vous revoir bientôt</span>
</div>
</div>
<script src="assets/js/camera_page.js"></script>
<script src="assets/js/network.js"></script>
<script src="assets/js/thank_you_page.js"></script>
<script src="assets/js/audio_page.js"></script>
<script src="assets/js/sleeping_page.js"></script>
<script src="assets/js/state_manager.js"></script>
<script src="assets/js/main.js"></script>
</body>
</html>

View File

@@ -0,0 +1,24 @@
**/.classpath
**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/charts
**/docker-compose*
**/compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
LICENSE
README.md

View File

@@ -0,0 +1,15 @@
FROM node:lts-alpine
WORKDIR /usr/src/app
# Install the dependencies
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
# Copy the source code
COPY . .
# Give ownership of the app directory to the node user
RUN chown -R node /usr/src/app
# Switch to the non-root node user
USER node
CMD ["node", "index.js"]

View File

@@ -1,4 +1,7 @@
# Installation
# Data-processing server
This server provides a web API to add reviews from the kiosk, retrieve reviews, and compute and serve statistics about those reviews
# Installation (if you want to run it outside the Docker container)
* To get the server running on your machine there are 3 things to do
1. Install Node.js: https://nodejs.org/en/download/
2. Open a terminal, go to this folder (code/server) and run `npm install` to install the required packages
@@ -13,6 +16,7 @@
- `/borne/get_criteres` : returns the valid rating criteria for the additional grades
- `/borne/notes_autres?critere=CRIT&limit=LIM` : returns the last LIM grades for the criterion CRIT
- `/borne/notes_autres?id=ID&limit=LIM` : returns all the specific grades attached to review ID
- `get_stats?interval=INTERVAL&limit=LIM` : INTERVAL is one of "jour", "mois", "annee", "semaine" (the aggregation interval requested), LIM is the number of stats rows to retrieve; see the request sketch below
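A minimal request sketch for the stats route (illustrative only; it assumes Node 18+ with global `fetch` and the API exposed on localhost:8080 as in the Docker setup):

```js
// Fetch the 5 most recent weekly stats rows from the reviews API.
const res = await fetch("http://localhost:8080/borne/get_stats?interval=semaine&limit=5");
const stats = await res.json();
console.log(stats);
```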
### POST routes
- `/add_review` : adds a review and an author, POST parameters:

View File

@@ -29,7 +29,12 @@ const getLastReviews = (limit=10) => {
*/
const getReviewFromId = (id) => {
return new Promise((resolve, reject) => {
let sql = `SELECT * FROM borne_avis WHERE id = ? LIMIT 1`;
let sql = `SELECT borne_avis.id,date,note_principale,commentaire,sources.nom as nom_source, borne_auteurs.sexe as sexe_auteur, borne_auteurs.age as age_auteur
FROM borne_avis
JOIN sources ON sources.id = source_id
JOIN borne_auteurs ON borne_auteurs.id = id_auteur
WHERE borne_avis.id = ?
LIMIT 1`;
conn.query(sql, [id], (err, res) => {
if (err) {
reject(err);
@@ -138,7 +143,16 @@ const getStats = (interval, limit=10) => {
These functions are handlers for the Express routes; they are called by the routes and return the data as JSON
*/
export const handleGetLastReviews = (req, res) => {
getLastReviews(req.query.limit)
if (req.query.limit) {
getLastReviews(Number(req.query.limit))
.then((reviews) => {
res.send(reviews);
})
.catch((err) => {
res.status(500).send("Error: " + err.message);
});
} else {
getLastReviews()
.then((reviews) => {
res.send(reviews);
})
@@ -146,11 +160,14 @@ export const handleGetLastReviews = (req, res) => {
res.status(500).send("Error: " + err.message);
});
}
}
export const handleGetReview = (req, res) => {
getReviewFromId(req.query.id)
.then((review) => {
res.send(review);
getNotesAutresFromReview(req.query.id).then((notesAutres) => {
res.send({ ...review, notes_autres: notesAutres });
})
})
.catch((err) => {
res.status(500).send("Error: " + err.message);
@@ -169,13 +186,23 @@ export const handleGetCriteres = (req, res) => {
export const handleGetNotesAutres = (req, res) => {
if (req.query.critere) {
getNotesAutresFromCritere(req.query.critere, req.query.limit)
if (req.query.limit) {
getNotesAutresFromCritere(req.query.critere, Number(req.query.limit))
.then((notes) => {
res.send(notes);
})
.catch((err) => {
res.status(500).send("Error: " + err.message);
});
} else {
getNotesAutresFromCritere(req.query.critere)
.then((notes) => {
res.send(notes);
})
.catch((err) => {
res.status(500).send("Error: " + err.message);
});
}
} else if (req.query.id) {
getNotesAutresFromReview(req.query.id)
.then((notes) => {
@@ -190,7 +217,16 @@ export const handleGetNotesAutres = (req, res) => {
}
export const handleGetStats = (req, res) => {
getStats(req.query.interval, req.query.limit)
if (req.query.limit) {
getStats(req.query.interval, Number(req.query.limit))
.then((stats) => {
res.send(stats);
})
.catch((err) => {
res.status(500).send("Error: " + err.message);
});
} else {
getStats(req.query.interval)
.then((stats) => {
res.send(stats);
})
@@ -198,3 +234,4 @@ export const handleGetStats = (req, res) => {
res.status(500).send("Error: " + err.message);
});
}
}

View File

@@ -3,12 +3,13 @@ import express from 'express';
import bodyParser from 'body-parser';
import { addReviewFromRequest } from './borne/post_handler.js';
import { addSocialReviewFromRequest } from './reseaux_sociaux/post_handler.js';
import { startCronJobs } from './stats/update_stats.js';
import { startCronJobs, manualUpdateStats } from './stats/update_stats.js';
import * as borneHandler from './borne/get_handler.js';
import cors from "cors";
const app = express();
app.use(bodyParser.urlencoded({extended:true}))
app.use(cors({origin:'*'}))
dotenv.config()
app.post('/add_review', (req,res) => addReviewFromRequest(req,res));
app.post('/add_social_review', (req,res) => addSocialReviewFromRequest(req,res));
@@ -19,6 +20,10 @@ app.get('/borne/get_criteres', borneHandler.handleGetCriteres);
app.get('/borne/notes_autres', borneHandler.handleGetNotesAutres);
app.get('/borne/get_stats', borneHandler.handleGetStats);
app.get('/update_stats', (req, res) => {
manualUpdateStats();
res.send("OK");
})
startCronJobs();

View File

@@ -10,6 +10,7 @@
"license": "ISC",
"dependencies": {
"body-parser": "^1.20.1",
"cors": "^2.8.5",
"cron": "^2.1.0",
"dotenv": "^16.0.3",
"express": "^4.18.2",
@@ -108,6 +109,18 @@
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz",
"integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ=="
},
"node_modules/cors": {
"version": "2.8.5",
"resolved": "https://registry.npmjs.org/cors/-/cors-2.8.5.tgz",
"integrity": "sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g==",
"dependencies": {
"object-assign": "^4",
"vary": "^1"
},
"engines": {
"node": ">= 0.10"
}
},
"node_modules/cron": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/cron/-/cron-2.1.0.tgz",
@@ -491,6 +504,14 @@
"node": ">= 0.6"
}
},
"node_modules/object-assign": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz",
"integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/object-inspect": {
"version": "1.12.2",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.12.2.tgz",
@@ -802,6 +823,15 @@
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz",
"integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ=="
},
"cors": {
"version": "2.8.5",
"resolved": "https://registry.npmjs.org/cors/-/cors-2.8.5.tgz",
"integrity": "sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g==",
"requires": {
"object-assign": "^4",
"vary": "^1"
}
},
"cron": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/cron/-/cron-2.1.0.tgz",
@@ -1101,6 +1131,11 @@
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz",
"integrity": "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg=="
},
"object-assign": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz",
"integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg=="
},
"object-inspect": {
"version": "1.12.2",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.12.2.tgz",

View File

@@ -11,6 +11,7 @@
"license": "ISC",
"dependencies": {
"body-parser": "^1.20.1",
"cors": "^2.8.5",
"cron": "^2.1.0",
"dotenv": "^16.0.3",
"express": "^4.18.2",

View File

@@ -8,6 +8,10 @@ SET @date_limite = DATE_ADD(NOW(), INTERVAL -DAY_COUNT_DELAY DAY);
Retrieve the average grades over the period, separating overall, kiosk and website
*/
SELECT @nb_avis:=COUNT(*)
FROM borne_avis
WHERE borne_avis.date > @date_limite;
SELECT @moyenne_globale:=AVG(note_principale)
FROM borne_avis
WHERE borne_avis.date > @date_limite;
@@ -38,7 +42,7 @@ SELECT @stats_a:=COUNT(*) FROM borne_avis
SET @dist_sexe = CONCAT(@stats_f,",",@stats_h,",",@stats_a);
INSERT INTO STATS_GENERAL_TABLE_NAME (moyenne_globale, moyenne_borne, moyenne_site, dist_sexe) VALUES (@moyenne_globale, @moyenne_borne, @moyenne_site, @dist_sexe);
INSERT INTO STATS_GENERAL_TABLE_NAME (moyenne_globale, nb_avis, moyenne_borne, moyenne_site, dist_sexe) VALUES (@moyenne_globale, @nb_avis, @moyenne_borne, @moyenne_site, @dist_sexe);
INSERT INTO STATS_AUTRES_TABLE_NAME (critere_id, note)
SELECT critere_id, AVG(note) as moyenne FROM borne_notes_autre

View File

@@ -67,3 +67,10 @@ export const startCronJobs = () => {
)
console.log("All cronjobs initiated")
}
export function manualUpdateStats() {
computeStats(1, "stats_general_jour", "stats_autres_jour");
computeStats(7, "stats_general_semaine", "stats_autres_semaine");
computeStats(30, "stats_general_mois", "stats_autres_mois");
computeStats(365, "stats_general_annee", "stats_autres_annee");
}

View File

@@ -1,29 +0,0 @@
import requests
#Exemple ajout d'un commentaire depuis la borne (site ou geste)
avis = {
"note": 8,
"source": "borne",
#Optionel
"auteur_age": 20,
"notes_autre": '{"proprete":8,"calme":10}',
"auteur_sexe": 'f',
"commentaire": "Commentaire"
}
# res = requests.post("http://localhost:8080/add_review", data=avis)
# print(res.text)
#Exemple ajout d'un commentaire trouvé sur les réseaux sociaux
avis = {
"auteur_nom": "michel",
"source": "instagram",
"note": 8,
"date": "2022-12-24",
#Optionel
"commentaire": "J'ai beaucoup aimé !",
"lien": "https://instagram.com/si_insta_avait_des_liens_vers_des_commentaires_faudrait_le_mettre_ici",
"auteur_lien": "https://instagram.com/michel",
}
res = requests.post("http://localhost:8080/add_social_review", data=avis)
print(res.text)

2
code/setup.sh Executable file
View File

@@ -0,0 +1,2 @@
#!/bin/sh
sudo modprobe v4l2loopback devices=2

View File

@@ -0,0 +1,3 @@
FROM alpine:latest
RUN apk add --no-cache ffmpeg
CMD ["ffmpeg","-video_size","640x480","-f","video4linux2","-i","/dev/video0","-codec","copy","-f","v4l2","/dev/video1","-codec","copy","-f","v4l2","/dev/video2", "-loglevel","debug"]

3
docs/charte_graphique.md Normal file
View File

@@ -0,0 +1,3 @@
# Hex color codes
#6B8000
#A6CC00

12
docs/dupliquer_camera.md Normal file
View File

@@ -0,0 +1,12 @@
# Method for accessing the video stream from both Firefox and OpenCV
* Install v4l2loopback:
* Download: `git clone https://github.com/umlaeute/v4l2loopback.git`
* Install with `make` then `sudo make install`
* Load the module: `sudo modprobe v4l2loopback devices=2`
* Possible error: "Operation not permitted": Secure Boot has to be disabled
* OR `apt update && apt install v4l2loopback-dkms v4l2loopback-utils`
* [Not needed if the video_loopback container is running] Loop the /dev/video0 camera onto the virtual devices
* Install ffmpeg: `sudo apt-get install ffmpeg`
* Start the loopback: `ffmpeg -video_size 640x480 -f video4linux2 -i /dev/video0 -codec copy -f v4l2 /dev/video1 -codec copy -f v4l2 /dev/video2`
Firefox can now use /dev/video2 and OpenCV /dev/video1, for example, without any conflict.

16
docs/liste_pages_web.md Normal file
View File

@@ -0,0 +1,16 @@
List of the web pages to design across the whole project
* Kiosk interface
* Interface with video feedback and effects when a gesture is detected
* Thank-you page, indicating that the person can speak to elaborate on their review
* Thank-you page
* Review feedback form
* Form
* Thank-you page
* Admin interface
* Main page with the key statistics
* Page listing the recent reviews
* Detailed kiosk statistics page where the stats aggregation interval can be changed
* Page with the latest reviews collected from social media
* Page with the statistics computed from social media

View File

@@ -0,0 +1,6 @@
Things to fix in the report
* X Replace the screenshots with real charts
* X Add standard deviations to the charts
* X Maybe add absolute values to the benchmarks
* X Check whether using a power-saving mode on the CPU is worthwhile (temperature vs performance)
* X Say that it is not real time but that it suits our needs

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.3 MiB

View File

@@ -0,0 +1,5 @@
FROM alpine:latest
RUN apk add --no-cache sysbench
WORKDIR /app
COPY benchmark_script.sh /app/benchmark_script.sh
CMD ["sh","benchmark_script.sh"]

View File

@@ -0,0 +1,5 @@
sysbench --test=cpu run >>sysbench.log
sysbench --test=memory run >>sysbench.log
sysbench --test=fileio --file-test-mode=rndrw prepare
sysbench --test=fileio --file-test-mode=rndrw run >>sysbench.log
sysbench --test=fileio cleanup

View File

@@ -0,0 +1,114 @@
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Prime numbers limit: 10000
Initializing worker threads...
Threads started!
CPU speed:
events per second: 613.74
General statistics:
total time: 10.0013s
total number of events: 6140
Latency (ms):
min: 1.62
avg: 1.63
max: 2.10
95th percentile: 1.64
sum: 9998.37
Threads fairness:
events (avg/stddev): 6140.0000/0.00
execution time (avg/stddev): 9.9984/0.00
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Running memory speed test with the following options:
block size: 1KiB
total size: 102400MiB
operation: write
scope: global
Initializing worker threads...
Threads started!
Total operations: 33390517 (3338076.27 per second)
32607.93 MiB transferred (3259.84 MiB/sec)
General statistics:
total time: 10.0000s
total number of events: 33390517
Latency (ms):
min: 0.00
avg: 0.00
max: 0.10
95th percentile: 0.00
sum: 4949.30
Threads fairness:
events (avg/stddev): 33390517.0000/0.00
execution time (avg/stddev): 4.9493/0.00
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Extra file open flags: (none)
128 files, 16MiB each
2GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...
Threads started!
File operations:
reads/s: 789.46
writes/s: 526.31
fsyncs/s: 1691.85
Throughput:
read, MiB/s: 12.34
written, MiB/s: 8.22
General statistics:
total time: 10.0292s
total number of events: 30045
Latency (ms):
min: 0.00
avg: 0.33
max: 17.49
95th percentile: 2.48
sum: 9966.50
Threads fairness:
events (avg/stddev): 30045.0000/0.00
execution time (avg/stddev): 9.9665/0.00

View File

@@ -0,0 +1,114 @@
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Prime numbers limit: 10000
Initializing worker threads...
Threads started!
CPU speed:
events per second: 552.76
General statistics:
total time: 10.0015s
total number of events: 5530
Latency (ms):
min: 1.79
avg: 1.81
max: 2.34
95th percentile: 1.82
sum: 9999.03
Threads fairness:
events (avg/stddev): 5530.0000/0.00
execution time (avg/stddev): 9.9990/0.00
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Running memory speed test with the following options:
block size: 1KiB
total size: 102400MiB
operation: write
scope: global
Initializing worker threads...
Threads started!
Total operations: 28620880 (2861268.15 per second)
27950.08 MiB transferred (2794.21 MiB/sec)
General statistics:
total time: 10.0001s
total number of events: 28620880
Latency (ms):
min: 0.00
avg: 0.00
max: 0.10
95th percentile: 0.00
sum: 5501.03
Threads fairness:
events (avg/stddev): 28620880.0000/0.00
execution time (avg/stddev): 5.5010/0.00
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Extra file open flags: (none)
128 files, 16MiB each
2GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...
Threads started!
File operations:
reads/s: 785.13
writes/s: 523.42
fsyncs/s: 1687.54
Throughput:
read, MiB/s: 12.27
written, MiB/s: 8.18
General statistics:
total time: 10.0083s
total number of events: 29866
Latency (ms):
min: 0.00
avg: 0.33
max: 18.46
95th percentile: 2.48
sum: 9963.06
Threads fairness:
events (avg/stddev): 29866.0000/0.00
execution time (avg/stddev): 9.9631/0.00

View File

@@ -0,0 +1,8 @@
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ff5ebc88a1f code-backend_reconaissance "python main.py" 3 hours ago Up 2 hours 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp backend_reconaissance
02e8569d1926 code-video_loopback "ffmpeg -video_size …" 3 hours ago Up 2 hours video_loopback
d50295f99ae5 phpmyadmin:latest "/docker-entrypoint.…" 4 hours ago Up 2 hours 0.0.0.0:8000->80/tcp, :::8000->80/tcp phpmyadmin
a5efc4ddae1b code-reviews_api "docker-entrypoint.s…" 4 hours ago Up 2 hours 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp reviews_api
830786df4f5f httpd:latest "httpd-foreground" 4 hours ago Up 2 hours 0.0.0.0:8888->80/tcp, :::8888->80/tcp interface_borne
2fd04a8fe768 mysql:latest "docker-entrypoint.s…" 4 hours ago Up 2 hours (healthy) 3306/tcp, 33060/tcp db
9999c72bb59f httpd:latest "httpd-foreground" 4 hours ago Up 2 hours 0.0.0.0:80->80/tcp, :::80->80/tcp interface_admin

Binary file not shown.

After

Width:  |  Height:  |  Size: 22 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 39 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 15 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 12 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 12 KiB

View File

@@ -0,0 +1,111 @@
# PAN2 Report
## Hardware choices
* **Screen** (InnoLux N133HSE): we chose a 13.3" (non-touch) screen; we judged this size sufficient for comfortable reading at a reasonable distance from the kiosk. This screen also has a screw-mounting system that will make it easier to integrate into the kiosk.
* **Camera**: we had a choice between 2 cameras
* A Logitech C525 webcam
* Wide choice of resolutions with a high frame rate, for example 640x480@30fps, 960x720@30fps, 1920x1080@5fps
* But a field of view far too narrow for our application
* A wide-angle ELP surveillance camera
* Limited choice of resolutions, but usable in good lighting conditions
<img src="img/formats_camera.png" height=250>
* Field of view suited to our use case
* This camera introduces significant distortion near the edges, which makes image recognition harder.
* We chose the second camera for the reasons listed above
* **Computer**:
We have at our disposal an AMD Cubi with 4 GB of RAM, a 128 GB SSD and an Intel Core i3 5005U. The goal of the rest of this document is to assess whether this computer's performance is sufficient for our application.
## Operating system choice
We chose to install Debian 11, a lightweight Linux distribution we were familiar with. For the desktop environment we chose LXDE, again for its minimalism given the machine's performance. We then uninstalled all the packages installed by default with LXDE that we did not need (for example the games, the calculator, etc.).
## Benchmarking
To evaluate whether the hardware can handle our application's load, we ran in parallel the applications that will be used in the final product, or an application with equivalent requirements when that was not possible. This benchmark is therefore not exact, but it gives an order of magnitude of our project's needs.
### List of modules to run
To do this we set up one Docker container per module of our application:
* **db**: a basic MySQL server for the database
* **phpmyadmin**: the phpMyAdmin interface to manage the database. This container is not strictly required in the final product but makes administering the database easy
* **review_api**: an Express server exposing the API used to retrieve and add reviews in the database, and to compute the statistics.
* **interface_borne**: an Apache2 server serving the kiosk's graphical interface.
* **interface_admin**: same as above for the admin interface; this server could be merged with
**interface_borne**, but for this benchmark we keep them separate.
* **backend_reconnaissance**: this container will handle the kiosk's audio and video recognition. These two processes will never be active at the same time, and image recognition will be the most expensive in terms of computing power, so here we only used MediaPipe Hands (a Python hand-recognition library) together with an implementation of the communication with the kiosk interface as an equivalent workload.
* **video_loopback**: this container works around a problem we ran into with how Linux handles cameras: only one program can access a camera's stream at a time. We therefore used `v4l2loopback` with `ffmpeg` to duplicate our camera's stream into 2 virtual cameras.
In parallel, Firefox is open to display the kiosk's graphical interface.
### Results
With the camera set to 640x480@30fps, no frame drops are observed in the video feedback in Firefox.
We measured MediaPipe's ability to process the camera image and to communicate the result to the web interface, which displays it.
We obtained the following results for 2 CPU frequency management modes (schedutil, the default automatic governor, and performance, which always uses the maximum frequency):
| CPU governor | Average FPS | Std dev |
|--------------|------------|------------|
| schedutil | 10.4 | 1.4 |
| performance | 10.1 | 0.9 |
This refresh rate is enough for the application to feel reasonably responsive to the user, even though it is not real time.
Meanwhile, the review-processing API and the database keep working correctly, with a latency of 8 ms to retrieve the list of reviews.
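As a rough illustration of how such a latency figure can be checked (a sketch only: it assumes Node 18+ with global `fetch`, the API exposed on localhost:8080 as in the Docker setup, and a last-reviews route whose exact name is not shown in this diff):

```js
// Hypothetical one-off latency probe against the reviews API (route name assumed).
const t0 = performance.now();
await fetch("http://localhost:8080/borne/last_reviews?limit=10");
console.log(`reviews list fetched in ${(performance.now() - t0).toFixed(1)} ms`);
```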
During this time the CPU usage varies from 250% to 280% (out of 400% for the 4 cores) and RAM usage sits at 50% (1.96 GB), which leaves us headroom for unplanned additions.
The processes using the most CPU are the image recognition (70%) and Firefox displaying the kiosk interface (70-80%). If needed these figures could be reduced, at the cost of the smoothness of the video feedback.
For RAM, the MySQL server (10%) and Firefox (10%) consume the most.
Regarding temperature, since the kiosk will sit in an enclosed environment, it was important to check that the hardware works properly in those conditions. We let the application run for 2 hours inside a closed cardboard box. At the start of the test the CPU temperature was 50°C; after 2 hours it had risen to 70°C, which is still low enough not to throttle the CPU.
```
$ sensors
acpitz-acpi-0
Adapter: ACPI interface
temp1: +27.8°C (crit = +110.0°C)
temp2: +29.8°C (crit = +110.0°C)
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +69.0°C (high = +105.0°C, crit = +105.0°C)
Core 0: +69.0°C (high = +105.0°C, crit = +105.0°C)
Core 1: +68.0°C (high = +105.0°C, crit = +105.0°C)
```
<div style="page-break-after: always;"></div>
## Impact of using Docker
We also wondered what impact the use of Docker has on our project's performance. To measure it, we ran benchmarks directly on the system and then inside a Docker container and compared the results. We used sysbench to evaluate the performance of the CPU, the RAM and the disk (random read/write).
The script used to run the benchmark:
```sh
sysbench --test=cpu run >>sysbench.log
sysbench --test=memory run >>sysbench.log
sysbench --test=fileio --file-test-mode=rndrw prepare
sysbench --test=fileio --file-test-mode=rndrw run >>sysbench.log
sysbench --test=fileio cleanup
```
The Dockerfile of the container in which we ran the same script:
```Dockerfile
FROM alpine:latest
RUN apk add --no-cache sysbench
WORKDIR /app
COPY benchmark_script.sh /app/benchmark_script.sh
CMD ["sh","benchmark_script.sh"]
```
The results of this test let us conclude that the impact of Docker is negligible.
<img src="img/benchmark.png" width=400>
| Setup | CPU (events/s) | RAM (MiB/s) | Disk read (MiB/s) | Disk write (MiB/s) |
|--------------|------------|------------|---|---|
| Native | 613.74 | 3259.84 | 12.34 | 8.22 |
| Docker | 552.76 | 2794.21 | 12.27 | 8.18 |
## Conclusion
Given the tests performed, the hardware at our disposal seems suitable for our project. It would nevertheless be possible to reduce the load on the computer embedded in the kiosk by moving review storage and processing to another server.

Binary file not shown.

View File

@@ -0,0 +1,11 @@
acpitz-acpi-0
Adapter: ACPI interface
temp1: +27.8°C (crit = +110.0°C)
temp2: +29.8°C (crit = +110.0°C)
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +69.0°C (high = +105.0°C, crit = +105.0°C)
Core 0: +69.0°C (high = +105.0°C, crit = +105.0°C)
Core 1: +68.0°C (high = +105.0°C, crit = +105.0°C)