
realtime-body-tracking

Installation

This project uses Python 3.6.0. We also recommend using Anaconda to manage your Python environments.

Downloading SMPL models

The SMPL models can be downloaded from this link. Alternatively, just run the following script:

./setup.sh

This should copy and rename the SMPL models into the correct folders. Either way, the resulting folder structure should look like this (ignore smplx if you don't intend to use it):

└── models
    ├── smpl
    │   ├── SMPL_FEMALE.npz
    │   ├── SMPL_MALE.npz
    │   └── SMPL_NEUTRAL.pkl
    └── smplx
        ├── SMPLX_FEMALE.npz
        ├── SMPLX_FEMALE.pkl
        ├── SMPLX_MALE.npz
        ├── SMPLX_MALE.pkl
        ├── SMPLX_NEUTRAL.npz
        └── SMPLX_NEUTRAL.pkl
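
Once the models are in place, they can be loaded from this folder. The snippet below is a minimal sketch that assumes the smplx package is used as the loader (an assumption, not confirmed by this README); it resolves the gender and model type against the folder tree shown above:

import smplx

# Load the neutral SMPL model from the ./models folder.
# The ext argument has to match the file extension (.pkl for SMPL_NEUTRAL here).
model = smplx.create("models", model_type="smpl", gender="neutral", ext="pkl")
print(model)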

VPoser

To use bodyPrior in the configuration, please download VPoser and place it into the ./vposer_v1_0 directory in the project root. VPoser can be downloaded from this link after creating an account on the SMPL-X website.
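
As a minimal sketch of how VPoser is typically loaded from that directory (assuming the human_body_prior package used by SMPLify-X; this is an assumption about the project's setup, not its confirmed code):

from human_body_prior.tools.model_loader import load_vposer

# Load the VPoser checkpoint from the ./vposer_v1_0 directory in the project root.
vposer, _ = load_vposer("./vposer_v1_0", vp_model="snapshot")
vposer.eval()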

Mesh intersection

To use intersectLoss in the configuration, please pull the GitHub repo. The repo is patched to run on newer versions of PyTorch. Note: it only runs on Linux-based operating systems; we had trouble getting it to work on Windows.
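
The typical build steps are sketched below; the repository URL is a placeholder for the repo linked above, and the exact commands may differ:

git clone <patched-mesh-intersection-repo>   # placeholder: use the repo linked above
cd <patched-mesh-intersection-repo>
python setup.py install                      # builds the CUDA extension, Linux only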

Conda Environment

Create a new conda env by typing

conda create --name tum-3d-proj python=3.6
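
After creating the environment, activate it so that the packages installed in the next step end up inside it:

conda activate tum-3d-proj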

When using VSCode, the IDE should automatically use the Python interpreter created by conda.

Pip packages

The required pip packages are stored within the requirements.txt file and can be installed via

pip install -r requirements.txt

Exporting imported packages to pip

Since pip does not record installed packages in a project file on its own, you have to export them explicitly:

pip freeze > requirements.txt

Usage

The project provides example scripts demonstrating its usage. These can be run directly after installing all required packages. By default, the input data is expected to be located in the ./samples/ folder. If you want to try your own samples, we recommend extracting all frames from the source video in PNG format and passing the video through OpenPose to export the keypoint results in .json format.
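
For example, preparing your own sample could look like this (a sketch assuming a standard ffmpeg install and OpenPose build; file and folder names are placeholders):

# extract all frames from the source video as PNG images
mkdir -p ./samples/my_sample
ffmpeg -i my_video.mp4 ./samples/my_sample/%06d.png

# run OpenPose on the video and export the keypoints as .json files
./build/examples/openpose/openpose.bin --video my_video.mp4 --write_json ./samples/my_sample_keypoints/ --display 0 --render_pose 0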

Input data

The input is expected to be OpenPose keypoints and the corresponding output images (required for previews). These files should be placed in the samples folder; the folder name and the name format of the samples can be configured in config.yaml.
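
As an illustration only, the relevant settings could look roughly like the snippet below; the key names are hypothetical, so check config.yaml for the actual ones:

# hypothetical keys, consult config.yaml for the real names
sample_folder: ./samples/my_sample
image_name_format: "%06d.png"
keypoint_name_format: "%06d_keypoints.json"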

Selecting config file

By default, the config is expected at ./config.yaml. This can be changed to any other path by setting the CONFIG_PATH environment variable:

CONFIG_PATH=/path/to/myconfig.yaml python example_fit.py
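
Inside the scripts, the config path is presumably resolved along these lines (a sketch for illustration, not the project's actual loading code):

import os
import yaml

# fall back to ./config.yaml when CONFIG_PATH is not set
config_path = os.environ.get("CONFIG_PATH", "./config.yaml")
with open(config_path) as f:
    config = yaml.safe_load(f)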

OpenPose