posted by 나무꾼! 2013. 4. 25. 12:34

* "mabinogi heroes postmotem" by HyunWoo Ki (Senior Engineer)

  (http://heroes.nexon.com/)

 

They used z-axis bone mirroring to implement the monster linked below, which can walk on the ceiling (wall climbing?).

However, the AI routines still run as if the monster were on the floor.

 

http://www.thisisgame.com/board/view.php?id=1516363&category=8202
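
The talk did not show code, but here is a minimal sketch of what mirroring a bone pose across the z = 0 plane could look like (the `BonePose` layout and `mirrorAcrossZ` name are my assumptions, not from the talk):

```cpp
// Reflect a bone pose through the z = 0 plane: the skeleton ends up
// upside down on the ceiling, while the AI can keep reasoning about
// the original floor-space pose.
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };
struct BonePose { Vec3 position; Quat rotation; };

BonePose mirrorAcrossZ(const BonePose& p)
{
    BonePose m = p;
    m.position.z = -p.position.z; // reflect the translation
    // Reflect the rotation: R' = M * R * M with M = diag(1, 1, -1),
    // which for a quaternion means negating the x and y components.
    m.rotation.x = -p.rotation.x;
    m.rotation.y = -p.rotation.y;
    return m;
}
```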

  

* "How to integrate AI and Animation using Prediction Model" by Simon Mack (CTO) from Natural Motion

 

He explained in great detail a technique for handling the interaction between AI and animation.

 

AI or Player <-> Motion Controller <-> Animation Engine

 

They generate sampling data from blended motion with some parameters in morpheme (NaturalMotion's animation middleware).

Example sampling data looks like the table below:

 

| sample | speed   | position delta distance |
|--------|---------|-------------------------|
| 1      | 0       | 1.9                     |
| 2      | 0.11111 | 2                       |
| 3      | 0.22222 | 2.1                     |
| 4      | 0.33333 | 2.2                     |
| 5      | 0.44444 | 2.3                     |
| 6      | 0.55555 | 2.4                     |
| 7      | 0.66666 | 2.5                     |
| 8      | 0.77777 | 2.6                     |
| 9      | 0.88888 | 2.7                     |
| 10     | 0.99999 | 2.8                     |
| 11     | 1       | 2.9                     |

 

They use this data table at runtime to predict the exact position delta on the AI side.
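
A minimal sketch of what that runtime lookup could be (my reconstruction; the `Sample` struct and `predictPositionDelta` function are hypothetical): given a query speed, find the two bracketing samples and interpolate the position delta linearly.

```cpp
#include <algorithm>
#include <vector>

// One row of the sampling table: input parameter (speed) -> the
// position delta the blended animation will actually produce.
struct Sample { float speed; float positionDelta; };

// Predict the position delta for an arbitrary speed by linearly
// interpolating between the two nearest samples (table sorted by speed).
float predictPositionDelta(const std::vector<Sample>& table, float speed)
{
    if (speed <= table.front().speed) return table.front().positionDelta;
    if (speed >= table.back().speed)  return table.back().positionDelta;

    // First sample with speed >= the query; interpolate against its
    // predecessor.
    auto hi = std::lower_bound(table.begin(), table.end(), speed,
        [](const Sample& s, float v) { return s.speed < v; });
    auto lo = hi - 1;
    float t = (speed - lo->speed) / (hi->speed - lo->speed);
    return lo->positionDelta + t * (hi->positionDelta - lo->positionDelta);
}
```

With the table above, `predictPositionDelta(table, 0.5f)` would return 2.35, so the AI knows how far the character will move before the animation ever plays.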

 

This approach has some limitations:

1. It is not a global model.

2. Offline sampling cannot cope with dynamic animation.

   For example, physics-driven animation (e.g. ragdoll) is not supported.

 

* "CUDA programming" by Yongcheon Yu (Freelancer)

Only 13~15 SMs (streaming multiprocessors) can be used for CUDA processing,

but they cannot be treated as fully general-purpose processors:

- control flow statements (if, else) are limited, because divergent branches within a warp are serialized (see the sketch below)

- recursive routines cannot be used

- the clock speed is lower than a CPU's

- it is hard to debug (but NVIDIA provides a Visual Studio plug-in for debugging)
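
To make the control-flow limitation concrete, here is a minimal CUDA sketch (mine, not from the talk) of warp divergence: threads in the same 32-thread warp that take different branches are executed serially, one path after the other.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Threads within one 32-thread warp that disagree on the branch below
// are serialized: the hardware runs the "if" path with the "else"
// lanes masked off, then the "else" path, so branchy code loses much
// of the GPU's throughput advantage.
__global__ void divergentKernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.0f)
        out[i] = in[i] * 2.0f;  // path A
    else
        out[i] = -in[i];        // path B, run after path A in the warp
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float hostIn[n], hostOut[n];
    for (int i = 0; i < n; ++i)
        hostIn[i] = (i % 2 == 0) ? -1.0f : 1.0f; // alternating signs
                                                 // force divergence
    float *devIn, *devOut;
    cudaMalloc(&devIn, bytes);
    cudaMalloc(&devOut, bytes);
    cudaMemcpy(devIn, hostIn, bytes, cudaMemcpyHostToDevice);

    divergentKernel<<<(n + 255) / 256, 256>>>(devIn, devOut, n);
    cudaMemcpy(hostOut, devOut, bytes, cudaMemcpyDeviceToHost);

    printf("out[0]=%.1f out[1]=%.1f\n", hostOut[0], hostOut[1]);
    cudaFree(devIn);
    cudaFree(devOut);
    return 0;
}
```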


Generally, we can expect about 3x better performance for light-map baking,

but it was not useful for the A* algorithm (presumably because of the branchy control flow noted above).