News Ubisoft, Nvidia, and Inworld AI partnership to produce "Neo NPC" game characters with AI-backed responses

Status
Not open for further replies.

Sleepy_Hollowed

Distinguished
Jan 1, 2017
536
237
19,270
Unless models and ML processors stay backward- and forward-compatible forever, this is a very expensive tax write-off.

I wouldn't touch this as a dev unless the ML can be run in Python as software emulation on the local machine.
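For reference, CPU-only inference in Python is already possible with off-the-shelf tooling. A minimal sketch, assuming the Hugging Face transformers library and a small placeholder model (nothing to do with whatever Ubisoft/Inworld actually ship):

```python
# Minimal sketch: CPU-only "software emulation" of an NPC reply using
# Hugging Face transformers. Assumes `pip install transformers torch`;
# the model name and prompt are placeholder choices for illustration.
from transformers import pipeline

# device=-1 forces CPU inference, no GPU/NPU required
generator = pipeline("text-generation", model="distilgpt2", device=-1)

prompt = (
    "Guard NPC: Halt! State your business in the city.\n"
    "Player: I'm just passing through.\n"
    "Guard NPC:"
)
reply = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(reply[0]["generated_text"])
```

Slow, sure, but it runs without any special hardware, which is the point.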
 

Notton

Commendable
Dec 29, 2023
903
804
1,260
It's noobisoft; they are going to overpromise and underdeliver, like they always have.

I am honestly surprised they haven't been under investigation for fraud, stock manipulation, etc.
 

baboma

Respectable
Nov 3, 2022
284
338
2,070
View: https://youtu.be/VTi2l_hjVRs

Dialog for the 1st NPC (Bloom) is pretty on-point, the 2nd one not so much. My question would be how much storywriting work is required to get to this level of conversation for Bloom, keeping in mind that this is just one NPC out of potentially many.

The text-to-speech can probably be done with an on-device NPU, but the LLM response at this point would probably require a cloud connection (read: an Internet connection). That would likely entail an upsell to a recurring payment model (read: a subscription).

Personally I would rather just have the responses conveyed in text form, and skip the TTS along with the facial expression and lip-sync effects. It's less work, and believability/immersion wouldn't be any worse than what's shown in the demo. Even if the demo's response lag and stuttering were fixed, it would still feel like talking to an animated mannequin.
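To make the split concrete, something like the sketch below: the reply text comes from a cloud LLM, and only the speech synthesis runs locally. The endpoint URL and response shape are made up for illustration, and pyttsx3 just stands in for whatever on-device TTS a real game would use.

```python
# Rough sketch of the split described above: the NPC reply comes from a
# cloud LLM (hypothetical endpoint, needs an internet connection), while
# text-to-speech runs locally. Assumes `pip install requests pyttsx3`.
import requests
import pyttsx3

# Hypothetical cloud endpoint and payload; a real service would differ.
LLM_URL = "https://api.example-npc-service.com/v1/generate"

def npc_reply(player_line: str) -> str:
    resp = requests.post(
        LLM_URL,
        json={"npc": "Bloom", "player_line": player_line},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["reply"]  # assumed response shape

def speak_locally(text: str) -> None:
    engine = pyttsx3.init()  # offline TTS, runs on the player's machine
    engine.say(text)
    engine.runAndWait()

line = npc_reply("What do you know about the guard patrols?")
speak_locally(line)
```

Every one of those round trips to the cloud is latency the player feels, which is exactly where the demo's response lag comes from.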
 

jeffy9987

Prominent
May 31, 2023
72
8
545
Wanna bet they list games that use this as "AAAA", even though it won't really add any benefit to the player? (Very few games give you any reason to talk to an NPC.)
No, AAAA is for live-service games; AAAAA is for games POWERED BY AI, or maybe AIAAAA, patent pending.
 

tamalero

Distinguished
Oct 25, 2006
1,231
247
19,670
baboma said:
(quoted in full above)
Not to mention, as far as game function goes, generating multiple branches for conversations would be incredibly tiresome if you're just looking to complete a quest and get to the point.

These dynamic convos would be somewhat fun for completely random, non-essential NPCs.
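A toy sketch of what a "get to the point" escape hatch might look like next to the freeform AI chat; the DialogueNode structure and all names here are hypothetical, not anything from the actual games:

```python
# Toy sketch: a dialogue node that offers a scripted quest shortcut next to
# freeform AI chat, so players who just want the objective can skip the
# open-ended conversation. Everything here is hypothetical, not engine code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DialogueNode:
    npc_name: str
    quest_summary: str                 # the scripted "get to the point" line
    ai_chat: Callable[[str], str]      # freeform LLM-backed handler

    def talk(self, player_input: str) -> str:
        if player_input.strip().lower() == "skip":
            return f"{self.npc_name}: {self.quest_summary}"
        return f"{self.npc_name}: {self.ai_chat(player_input)}"

# Stand-in for an actual LLM call.
def fake_llm(text: str) -> str:
    return "Hmm, an interesting question, traveler..."

node = DialogueNode("Bloom", "Take this package to the docks before sundown.", fake_llm)
print(node.talk("skip"))                     # scripted shortcut
print(node.talk("Tell me about this city"))  # freeform chat
```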
 