
Microsoft Enters the Augmented Reality (AR) Field, Bringing Immersive Experiences


Introduction

Information about a human user's pose can be mapped to a virtual articulated representation. For instance, when engaging with a virtual reality environment, a human user may make movements that are mirrored by a similarly postured avatar within the virtual environment. The user's real-world poses may be translated into the poses of a virtual articulated representation via a pre-trained model, which can be trained to output poses for the same virtual articulated representation used for final rendering. However, there are instances where the system may need to display a representation that does not directly match the one the model was trained on. For example, the user may opt for a cartoonish character with different body proportions, bone structure, and/or other aspects.

Figure 1 illustrates human user 100 within real-world environment 102. As shown, the human user's pose is applied to articulated representation 104: as the human user moves in the real-world environment, the corresponding movement is translated to articulated representation 104 in virtual environment 106. However, there may be times when the virtual articulated representation should differ from the representation that the model was trained on.

Concurrent human pose estimates for virtual representation

To address this challenge, the Microsoft patent "Concurrent human pose estimates for virtual representation" introduces techniques for simultaneously estimating the pose of a model articulated representation and a target articulated representation. More specifically, the computing system receives positional data detailing one or more body part parameters of the human user, based at least in part on inputs from one or more sensors. This may include, for example, output from a head-mounted display's inertial measurement unit, as well as output from suitable cameras. The system also maintains one or more mapping constraints that relate the model joint representation to the target joint representation, such as joint mapping constraints. Based at least in part on the positional data and the mapping constraints, a pose optimization engine concurrently estimates the model pose of the model joint representation and the target pose of the target joint representation. Once estimated, the target joint representation may be displayed as the virtual representation of the human user, posed with the target pose.

The pose optimization engine can be trained using training positional data that has ground-truth labels for the model joint representation; notably, the training data may lack ground-truth labels for the target joint representation. In this manner, the techniques detailed in the invention can accurately recreate a human user's real-world pose without the computationally expensive training that would otherwise be required for each possible target joint representation. For instance, when engaging with a virtual environment, the user may choose from a variety of different avatars to represent them, and may dynamically change their appearance during the course of the session.
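The training setup described above can be illustrated with a minimal sketch. The objective below is an assumption for illustration (the patent does not publish its loss): the model pose is supervised by ground-truth joint positions, while the target pose, which has no labels, is tied to the model pose only through the mapping constraints. All names here are hypothetical.

```python
import numpy as np

def combined_loss(model_pose, target_pose, ground_truth, mapping):
    """Illustrative objective: supervise the model pose directly, and supervise
    the target pose only via its mapped model joints (no target labels)."""
    # Data term: squared error between model joints and ground-truth positions.
    data = sum(
        float(np.sum((np.asarray(model_pose[j], dtype=float)
                      - np.asarray(g, dtype=float)) ** 2))
        for j, g in ground_truth.items()
    )
    # Constraint term: each target joint is pulled toward the weighted blend
    # of its mapped model joints -- no ground truth for the target is needed.
    constraint = 0.0
    for t, sources in mapping.items():
        blended = sum(w * np.asarray(model_pose[j], dtype=float)
                      for j, w in sources)
        constraint += float(np.sum(
            (np.asarray(target_pose[t], dtype=float) - blended) ** 2))
    return data + constraint

# Toy example: one model joint with a label, one target joint mapped 1:1.
loss = combined_loss(
    model_pose={"elbow": [0.0, 1.0, 0.0]},
    target_pose={"t_elbow": [0.0, 1.0, 0.0]},
    ground_truth={"elbow": [0.0, 1.0, 1.0]},
    mapping={"t_elbow": [("elbow", 1.0)]},
)
```

Because the constraint term is expressed purely in terms of the mapping, swapping in a new target representation only changes the mapping dictionary, not the supervised data term.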
New target joint representations may be added to the menu of representations available to the user without the computational cost of retraining the model for each specific representation. The techniques detailed in the invention can thus reduce computational resource consumption while accurately recreating the human user's real-world pose, and allow that accurate pose to be applied to any of a number of different target articulated representations. This is done by simultaneously estimating the poses for both the model and the target.

Figure 2 illustrates an example method 200 for posing virtual representations of the human body. In 202, positional data is received that details one or more body part parameters of the human user, based on inputs from one or more sensors. In 204, one or more mapping constraints are maintained that relate the model joint representation to the target joint representation.

Figure 4 shows an example model joint representation 400. As discussed above, the target joint representation is rendered for display in the virtual environment, and its pose is output by the pose optimization engine. The target joint representation may have any suitable appearance and proportions, and may have any suitable number of limbs, joints, and/or other movable body parts; it may resemble a non-human animal, a fictional character, or any suitable avatar. The model joint representation and the target joint representation are related via one or more mapping constraints 402, which may include joint mapping constraints 404. For a joint of the target joint representation, a joint mapping constraint specifies a set of one or more joints in the model joint representation. For example, model joint representation 400 includes a plurality of joints, two of which are labeled 403A and 403B, corresponding to the shoulder and elbow joints.
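The steps of method 200 can be sketched as a small pipeline. This is an assumed structure for illustration only, not the patent's API: sensor data and mapping constraints go in, and a single optimizer call produces both poses at once.

```python
from dataclasses import dataclass

# Illustrative sketch of method 200; all names are assumptions, not the
# patent's actual interfaces.
@dataclass
class PoseEstimate:
    model_pose: dict    # model-representation joint name -> 3-D position
    target_pose: dict   # target-representation joint name -> 3-D position

def estimate_poses(positional_data, mapping_constraints, optimizer):
    # 202: positional data detailing body part parameters has been received
    #      from sensors (e.g. an HMD's IMU and cameras) by the caller.
    # 204: mapping constraints relating model joints to target joints are
    #      maintained by the system and passed in alongside the data.
    # The optimizer estimates the model pose and the target pose concurrently.
    model_pose, target_pose = optimizer(positional_data, mapping_constraints)
    return PoseEstimate(model_pose, target_pose)

# Toy stand-in optimizer: treats the positional data as the model pose and
# copies joints through one-to-one constraints to the target.
def toy_optimizer(data, constraints):
    model = dict(data)
    target = {t: model[m] for t, m in constraints.items()}
    return model, target

est = estimate_poses({"elbow": (0.0, 1.1, 0.1)},
                     {"t_elbow": "elbow"},
                     toy_optimizer)
```

In the patent's formulation the optimizer would be the trained pose optimization engine; the point of the sketch is that both poses come out of one call rather than two sequential ones.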
Target joint representation 104 includes similar joints 405A and 405B. Accordingly, a plurality of mapping constraints may include different joint mapping constraints for joints 405A and 405B of the target joint representation, indicating that the joints map to joints 403A and 403B of the model representation. The joint mapping constraints may further specify weights for each of the joints in the model joint representation that map to the target joint representation joint. For example, when a single joint in the model joint representation maps to a particular joint of the target joint representation, the model joint may have a weight of 100%. When two model joints map to a target joint, the two model joints may have weights of 50% and 50%, for instance.
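The weighted joint mapping described above can be sketched directly: each target joint's position is a weighted blend of its mapped model joints, with weights summing to 100%. The joint names and coordinates below are illustrative, not from the patent.

```python
import numpy as np

# Hypothetical joint mapping constraints: each target joint maps to one or
# more model joints with blend weights that sum to 1.0.
JOINT_MAPPING = {
    "target_shoulder": [("model_shoulder", 1.0)],                     # 100%
    "target_elbow":    [("model_elbow", 0.5), ("model_wrist", 0.5)],  # 50/50
}

def apply_mapping(model_pose, mapping):
    """Blend model joint positions into target joint positions."""
    target_pose = {}
    for target_joint, sources in mapping.items():
        total = sum(w for _, w in sources)
        assert abs(total - 1.0) < 1e-6, "blend weights must sum to 1"
        target_pose[target_joint] = sum(
            w * np.asarray(model_pose[j], dtype=float) for j, w in sources
        )
    return target_pose

# Example: illustrative 3-D positions for the model joints.
model_pose = {
    "model_shoulder": [0.0, 1.4, 0.0],
    "model_elbow":    [0.0, 1.1, 0.1],
    "model_wrist":    [0.0, 0.8, 0.2],
}
target = apply_mapping(model_pose, JOINT_MAPPING)
```

A one-to-one constraint (weight 100%) simply copies the model joint, while a many-to-one constraint lets a target with fewer joints, such as a stubby-armed cartoon character, follow an average of several model joints.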

Benefits

The techniques detailed in the invention can provide one or more of the following benefits:

- Accurately recreate a user's real-world pose using a model joint representation.
- Apply the user's real-world pose to a variety of target joint representations, including representations with different appearances, proportions, and/or numbers of limbs, joints, and other movable body parts.
- Reduce the computational resources required by avoiding training a model for each different target joint representation.
- Enable users to dynamically change their virtual representation during the course of a virtual reality session.

Applications

The techniques detailed in the invention may be used in a variety of applications, including:

- Virtual reality games and simulations
- Virtual social environments
- Virtual try-on applications
- Motion capture systems

Conclusion

The techniques detailed in the Microsoft patent "Concurrent human pose estimates for virtual representation" provide a novel and efficient way to recreate a human user's real-world pose in a virtual environment. By simultaneously estimating the poses for both the model and the target, the techniques can accurately recreate the user's pose, even when the target representation is different from the representation that the model was trained on. This can provide a more immersive and engaging experience for users of virtual reality applications.

How Is AR Technology Being Applied in Entertainment?

In recent years, augmented reality (AR) technology has been widely applied in entertainment, bringing delightful new experiences. From video games to theme parks, concerts, and sporting events, the interactivity and immersion AR provides have been warmly received by consumers.

In gaming, AR has become a favorite of developers. It can add more realistic scenes and additional elements to a game: an ordinary-looking street, for example, can be transformed into a fantastical world. Players can bring in-game characters and scenes into their real surroundings and interact with the real world. In Pokemon Go, users discover virtual Pokemon in the real world and use their smartphones to catch and battle them.

Unlike video games, theme parks use AR to create lifelike story experiences for visitors. At Disney theme parks, for instance, AR can offer immersive Disney princess experiences such as The Little Mermaid and Cinderella; visitors can interact with virtual princesses and immerse themselves in the stories, adding fun and surprise to the visit.

Beyond games and theme parks, AR is also widely used at concerts and sporting events. At some large concerts, performers can use AR to bring virtual stage backdrops into the real world. At NBA games, AR can give fans richer interactive and visual experiences, letting them interact with star players and take part in games and contests.

In short, AR has brought revolutionary change to the entertainment field: more immersive experiences for games and theme parks, and more interactivity for live performances such as concerts and sporting events. As AR technology continues to develop and innovate, expectations for its applications in entertainment remain high.

The Difference Between AR Panoramas and VR Panoramas

The main difference between AR panoramas and VR panoramas lies in the kind of immersive experience they provide and the degree to which they alter the real world.

An AR panorama (augmented reality panorama) overlays virtual information onto the real world. Through a smartphone or dedicated device, it blends virtual elements such as images, animations, and 3D models into the user's actual field of view, enhancing their perception of reality. In an AR panorama, users see virtual objects within their real environment; these objects adjust as the user moves and changes viewpoint, maintaining interaction with the real world. For example, through a phone app, a user can place a virtual sofa or painting in their living room and preview the effect in real time.

A VR panorama (virtual reality panorama), by contrast, is a fully computer-generated simulated environment. Using a head-mounted device such as a VR headset, it takes the user entirely into a three-dimensional virtual world, providing an all-around immersive experience in which they can freely explore and interact with a sense of presence. Since everything in a VR panorama is computer-generated, users can experience scenes beyond reality: for example, touring outer space or the deep sea through a VR headset, environments that are hard to reach in real life.

The two also differ technically. An AR panorama typically relies on the camera of a smartphone or dedicated device to capture the real world and overlay virtual elements on it, while a VR panorama requires more complex head-mounted hardware and advanced graphics processing to generate and render a fully virtual environment.

In summary, an AR panorama adds virtual elements to the real world, while a VR panorama creates an entirely new virtual world for users to explore. Each technology has its own characteristics and application areas, offering users unprecedented modes of interaction and experience.
