A point in 3D space is represented with x, y, and z coordinates, commonly called a Vector3.
Basic Scene with a cube
We will create our BabylonJS scene on the Babylon Playground. Find out more about the editor in the What is Playground? chapter.
We will implement the 3D concepts we discussed in the How to create a basic 3D scene? chapter.
To create a basic BabylonJS scene, we initialize a scene, create a camera, a light, and a mesh, and return the scene object.
The scene object takes the engine as its input argument. You don't have to worry about the engine when you are working in the Playground, but when you are working on a local code sample, we will create the engine as well.
The Arc Rotate Camera acts like a satellite in orbit around a target and always points towards the target position. In our case, it points towards the box: wherever you move your camera, it will still point towards the box.
The Arc Rotate Camera parameters are: the name you want to give your camera, alpha (radians, the longitudinal rotation), beta (radians, the latitudinal rotation), radius (the distance from the target position), the target position (the center, where the box is created by default), and the scene (an optional argument). In this code example the scene is not given as an argument, so it defaults to the scene object in the Playground.
Setting beta to 0 or PI can, for technical reasons, cause problems.
A hemispheric light is an easy way to simulate ambient environment light. In our case, we are doing the bare minimum: giving the light a name and setting its direction.
Setting the direction's y value to 1 while our main object is at the center, (0, 0, 0), means the light comes from above the object.
We create a box mesh by calling the BABYLON.MeshBuilder.CreateBox method with the minimum requirements: a name and an empty options object.
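Putting the steps above together, a Playground-style createScene function might look like the sketch below. It assumes the globals the Playground provides (BABYLON, engine, and canvas); the parameter values are illustrative, not prescribed.

```javascript
// A minimal Playground-style scene sketch (assumes the global BABYLON
// namespace plus the engine and canvas objects the Playground provides).
const createScene = function () {
  // The scene is the stage that holds everything we render.
  const scene = new BABYLON.Scene(engine);

  // ArcRotateCamera(name, alpha, beta, radius, target, scene?)
  const camera = new BABYLON.ArcRotateCamera(
    "camera", -Math.PI / 2, Math.PI / 2.5, 3, BABYLON.Vector3.Zero());
  camera.attachControl(canvas, true);

  // HemisphericLight(name, direction, scene?) — (0, 1, 0) lights from above.
  const light = new BABYLON.HemisphericLight(
    "light", new BABYLON.Vector3(0, 1, 0));

  // CreateBox(name, options) — an empty options object uses the defaults.
  const box = BABYLON.MeshBuilder.CreateBox("box", {});

  return scene;
};
```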
Continue exploring how to create a 3D scene with other libraries
Keep working with BabylonJS and dive deeper into WebXR on the Babylon.js chapter:
Read more on BabylonJS documentation:
Browse community BabylonJS demos:
Lights can change our scenes drastically. Understanding lighting can help you create the effect you intend.
Check out the example below to see how a bump map creates more realistic results.
To see how to map real-time tweet location data to your globe using web sockets, check out the Tweet Migration project.
WebXR-compatible devices include fully-immersive 3D headsets (VR headsets) with motion and orientation tracking; augmented reality glasses, like HoloLens and Magic Leap, which overlay graphics atop the real-world scene passing through the frames; and AR-compatible (ARCore- and ARKit-supported) handheld mobile phones, which augment reality by capturing the world with a camera and adding computer-generated imagery to that scene.
WebXR is a group of standards being implemented by the browsers, used together to support rendering 3D scenes to hardware designed for Mixed Reality. Mixed Reality devices present virtual worlds (virtual reality, or VR) or add graphical imagery to the real world (augmented reality, or AR).
The WebXR Device API implements the core of the WebXR feature set: managing the selection of output devices, rendering the 3D scene to the chosen device at the appropriate frame rate, and managing input, such as controllers and hand interactions.
The WebXR Device APIs replace the deprecated WebVR API. WebVR was designed with only VR devices in mind. With the addition of new AR headsets and AR-capable handheld devices, the WebVR API was deprecated in favor of the WebXR Device APIs, which include the AR Modules.
iOS devices such as the iPhone and iPad currently do not support the WebXR APIs in Safari or Chrome, but the WebXR Viewer app is available from the App Store:
Introduction
WebXR Device APIs:
Immersive Devices
VR Headsets for Mobile Devices.
Oculus Quest
Augmented Reality(AR) or Mixed Reality(MR) Headsets
WebXR with Mobile Devices
Pokemon Go
WebVR vs WebXR
Virtual Reality on the Web
Mozilla Hello WebXR Demo:
WebXR Features
11:30 Gamepad API
Input Profiles Library
WebAR Module
Hit Test
How to Get Started Building WebXR Experiences
WebGL
WebXR Libraries
ThreeJS:
A-Frame:
BabylonJS:
React 360:
PlayCanvas:
More on A-Frame
A-Frame Hello World Code
Future of WebXR APIs
WebXR Accessibility and DOM Overlay API
Lighting Estimation Using Computer Vision
Anchors
Layers
Hand Interactions
WebXR Resources
30:10 How to Get Involved with Immersive Web Working and Community Groups
The basic steps most WebXR applications will go through are:
Query to see if the desired XR mode is supported.
If support is available, advertise XR functionality to the user.
A user-activation event indicates that the user wishes to use XR.
Request an immersive session from the device
Use the session to run a render loop that produces graphical frames to be displayed on the XR device.
Continue producing frames until the user indicates that they wish to exit XR mode.
End the XR session.
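The steps above can be sketched with the WebXR Device API as browser-only code. This is a hedged outline, not a complete app: enterButton is a hypothetical DOM element used to advertise XR, and the actual drawing is elided.

```javascript
// Sketch of the WebXR session lifecycle (requires a WebXR-capable browser).
async function advertiseXR(enterButton) {
  // 1. Query whether the desired XR mode is supported.
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-vr"))) {
    return; // no XR support: leave the 2D page as-is
  }

  // 2-3. Advertise XR and wait for a user-activation event.
  enterButton.addEventListener("click", async () => {
    // 4. Request an immersive session from the device.
    const session = await navigator.xr.requestSession("immersive-vr");

    // 5-6. Run a render loop until the user exits XR mode.
    const onXRFrame = (time, frame) => {
      session.requestAnimationFrame(onXRFrame); // queue the next frame
      // ...render the scene for this frame's viewer pose...
    };
    session.requestAnimationFrame(onXRFrame);

    // 7. The session ends when the user exits (or session.end() is called).
    session.addEventListener("end", () => {
      // ...clean up resources...
    });
  });
}
```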
Different browsers are implementing the WebXR APIs on different timelines. Currently, Chrome and the new Edge browser have the WebXR APIs turned on by default, and some of the features are behind experimental flags.
You can check the current support status at CanIUse.com.
You can turn on experimental flags by navigating to chrome://flags/ or edge://flags/, searching for the experimental flag you are looking to enable, and choosing Enabled from the drop-down menu.
XRReferenceSpaceType defines how much your user can move in your experience.
| XRReferenceSpaceType | Description | Interface |
| --- | --- | --- |
| bounded-floor | Similar to the local type, except the user is not expected to move outside a predetermined boundary, given by the boundsGeometry in the returned object. | XRBoundedReferenceSpace |
| local | A tracking space whose native origin is located near the viewer's position at the time the session was created. The exact position depends on the underlying platform and implementation. The user isn't expected to move much, if at all, beyond their starting position, and tracking is optimized for this use case. For devices with six degrees of freedom (6DoF) tracking, the local reference space tries to keep the origin stable relative to the environment. | XRReferenceSpace |
| local-floor | Similar to the local type, except the starting position is placed in a safe location for the viewer to stand, where the value of the y axis is 0 at floor level. If that floor level isn't known, the user agent will estimate the floor level. If the estimated floor level is non-zero, the browser is expected to round it in such a way as to avoid fingerprinting (likely to the nearest centimeter). | XRReferenceSpace |
| unbounded | A tracking space which allows the user total freedom of movement, possibly over extremely long distances from their origin point. The viewer isn't tracked at all; tracking is optimized for stability around the user's current position, so the native origin may drift as needed to accommodate that need. | XRReferenceSpace |
| viewer | A tracking space whose native origin tracks the viewer's position and orientation. This is used for environments in which the user can physically move around, and is supported by all instances of XRSession, both immersive and inline, though it's most useful for inline sessions. It's particularly useful when determining the distance between the viewer and an input, or when working with offset spaces. Otherwise, typically, one of the other reference space types will be used more often. | XRReferenceSpace |
If you are running into issues, check out the scenarios below that might be causing the problem.
Serving your site using http
You will not be able to see your site if you are serving it over http. The WebXR APIs require a secure https server.
This project discusses WebXR & AI use cases and works through a hands-on project.
Slides: https://bit.ly/XRWomenAI
State of WebXR APIs: https://caniuse.com/?search=webxr
Mozilla Developer Network: https://developer.mozilla.org/en-US/docs/Web/API/WebXR_Device_API
The project is built on Immersive Web co-chair Ada Rose Cannon's AR starter project on Glitch. You can create a free account and make a copy of the AR Starter Kit project linked below by choosing the Remix option. This will copy the project onto your account, and you can start editing directly.
You can find Ada's XR Template on Github
You can find more WebXR templates on Glitch WebXR Playlist
Add the Speech SDK by including the script below in your HTML.
Aframe docs: https://aframe.io/docs/
Visual inspector: Ctrl + Alt + i
Primitives are similar to prefabs in Unity. They abstract the core entity-component API to:
Pre-compose useful components together with prescribed defaults
Act as a shorthand for complex-but-common types of entities (e.g., <a-sky>)
Provide a familiar interface for beginners since A-Frame takes HTML in a new direction
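A minimal A-Frame scene using such primitives might look like the sketch below. The release version in the script URL is an assumption; check aframe.io for the current release.

```html
<!-- A-Frame "hello world": <a-scene> sets up the scene, camera, and renderer;
     primitives such as <a-box> and <a-sky> are nested inside it. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```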
You can see the resulting page on Glitch
Video outline:
00:00 Introduction http://bit.ly/webstreet
07:00 A-Frame VR game examples: https://dseffects.com/vr/games.php
09:30 WebXR Input Profiles: https://github.com/immersive-web/webx...
10:35 Frogger game
15:30 A-Frame Ocean Example: http://codevember-19-ocean.glitch.me/
18:11 Helicopter Model from Sketchfab: https://sketchfab.com/3d-models/bell-...
20:00 3D Assets: https://github.com/Yonet/MixedReality...
20:30 Asset Optimization
21:09 Texture Optimization
23:23 Gltf Viewer: https://gltf-viewer.donmccurdy.com/
25:00 Mesh Optimizer library: https://github.com/zeux/meshoptimizer
26:15 Mesh Optimizer Options: https://github.com/zeux/meshoptimizer...
28:00 RapidCompact Mesh Optimizer: https://rapidcompact.com/
30:55 Scene components overview
33:00 HDRI(High-Dynamic-Range Imaging) component
34:11 VArtiste Texture Painting Tool: https://vartiste.xyz/
36:35 Skies texture: https://www.cgskies.com/
39:00 Orbit Control component settings
42:25 Animation
49:00 Street Components
1:00:00 Draw calls
To create a basic scene we need to initialize a scene object. You can think of a scene as a stage that will hold your production. Everything that will be visible to the viewer will be added to the scene.
We need to add a camera object to the scene that will be our viewer's perspective. Anything in the scene but not in the view of the camera won't be visible on your canvas.
We also need a light source for our objects to be visible.
Finally, we will create a basic shape (mesh): a box.
Find the related links document: bit.ly/webstreet.
00:00 Introduction
02:24 Why are we doing this?
03:51 What are we building in this video?
05:30 What you need for this session. Google Docs: http://bit.ly/webstreet
11:10 Introduction to Glitch: https://glitch.com/
18:10 A-Frame intro: http://aframe.io/
39:16 Adding Orbit Controls Component
40:13 3DStreet component: https://github.com/kfarr/3dstreet
55:15 Mapbox Component: https://github.com/mattrei/aframe-map...
58:45 After manual configuration to position and rotate the street entity: https://glitch.com/edit/#!/17thbikelane
1:19:40 Augmented Reality example
1:11:07 A-Frame Presentation Component: https://github.com/rvdleun/aframe-pre...
11:18 Q&A
Find the links on Resources.
Loading a 3D model is much like loading an image, but for 3D models. You can load one or multiple models, as shown in the code sample below.
Loading takes time and is asynchronous. When we call the function, the web browser gives us back a "Promise" that it will give us a result when the loading is complete. We can then let the browser know what we want to do when the loading is complete inside .then(whatToDoWhenModelLoadedFunction). We call this function a callback function.
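The load-then-callback flow can be illustrated in plain JavaScript. The loader below is a stand-in, not a real BabylonJS call; the names loadModelAsync and house.glb are made up for illustration.

```javascript
// A stand-in "loader" that resolves asynchronously, like a real model loader.
function loadModelAsync(path) {
  return new Promise((resolve) => {
    // Pretend the file arrives on the next tick of the event loop.
    setTimeout(() => resolve({ name: path, position: { x: 0, y: 0, z: 0 } }), 0);
  });
}

// Calling the loader returns a Promise immediately...
const pending = loadModelAsync("house.glb");

// ...and the callback passed to .then() runs once loading completes.
pending.then((model) => {
  model.position.y = 2; // safe: the model exists by the time this runs
  console.log(model.name, model.position.y);
});
```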
Below example loads all the models in the path and changes the position.
1) The position of the house is changed in the above example by setting a numeric value on house1.position.y. Try scaling the model by setting its scale value, e.g. house1.scale.x.
2) Console.log the house2 object to see what it is. Save, then open the developer tools with the shortcut Option + Command + I on Mac or Control + Shift + I on Windows. Find the Console tab to see the printed house object.
We create a box mesh by calling the BABYLON.MeshBuilder.CreateBox method with the minimum requirements: a name and an empty options object.
A-Frame and Project Resources
A-Frame Documentation: http://aframe.io/
A-Frame VR game examples: https://dseffects.com/vr/games.php
WebXR Input Profiles: https://github.com/immersive-web/webxr-input-profiles
A-Frame Ocean Example: http://codevember-19-ocean.glitch.me/
3D Assets: https://github.com/Yonet/MixedRealityResources#assets
Mesh Optimizer library: https://github.com/zeux/meshoptimizer
Mesh Optimizer Options: https://github.com/zeux/meshoptimizer/tree/master/gltf#options
Gltf Viewer: https://gltf-viewer.donmccurdy.com/
RapidCompact Mesh Optimizer: https://rapidcompact.com/
VArtiste Texture Painting Tool: https://vartiste.xyz/
VArtiste Tutorial:
Skies texture: https://www.cgskies.com/
We need to add an environment and create an XR experience. This will automatically add a button to our scene to enter VR mode.
We can also add the WebXR directly to a loaded model without an environment.
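In code, BabylonJS's default XR experience helper does this setup in one call. This is a sketch: it assumes a scene and a ground mesh already exist, and it must run inside an async function.

```javascript
// Enable WebXR on an existing BabylonJS scene. The helper adds an
// "enter XR" overlay button and manages the XR session for you.
const xrHelper = await scene.createDefaultXRExperienceAsync({
  // Optional: meshes the user is allowed to teleport onto.
  floorMeshes: [ground],
});
```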
WebXR Chrome Dev Tools Testing
Find the code sample on Github
If you are starting a new project, you can use the below projects as a template on github:
Babylon Webpack es6 by Raanan
React Babylon by Brianzinn
Angular Babylon by JohnnyDevNull
Vue Babylon by Beg-in
Or you can follow the Babylon TypeScript Webpack setup tutorial.
You can use your favorite tool like npm or yarn to add babylon packages. Find more information about BabylonJS NPM packages on the documentation.
Introduction by Chris Noring
3D Concepts
Demo: Dinosaur garden
Introduction by Ayşegül Yönet
W3C Immersive Web Group
What is the benefit of Augmented Reality experiences?
How to add Augmented Reality with BabylonJS demo
How to run the demo app locally
How to test AR application with Chrome Developer Tools
What's next for BabylonJS and WebXR
Computer Vision demo with Azure Cognitive Services:
Demo and Resources Links
Code Samples:
You can sign up for the WebXR Meetup and see past events details here.
Sign up to WebXR YouTube Channel for the future videos.
Node.js download
WebXR Viewer on Apple store
Web Server for Chrome extension
Visual Studio Code(VSCode) download
VSCode Live Server Extension.
TypeScript in 5 minute introductions
Learn X in Y: TypeScript
BabylonJS Tools and Resources
ThreeJS Github
ThreeJS Chrome Dev Tool
ThreeJS Web Editor
ThreeJS Discord
ThreeJS Subreddit
Fundamentals of 3D on the web
The World Wide Web allows anyone to share their experiences with the world freely. With the availability of technologies like WebXR, it is now possible to share immersive experiences on the web as well.
How to work with 3 dimensions, 3D coordinates?
Learn how to develop Mixed Reality experiences on the web using WebXR Device APIs. WebXR Lessons: www.learnwebxr.dev
WebXR Lessons: www.learnwebxr.dev
Welcome to the introduction to the WebXR Lessons. We will talk about:
Short link to WebXR Lessons: www.learnwebxr.dev
Windows Mixed Reality JavaScript Documentation: aka.ms/WebXR
The web is for all, and the web is always free. As a developer, you will always have the rights to your own creation and distribution, which is not the case on any other platform.
Near and far clipping define the renderable area of a scene. Anything before the near clipping plane or beyond the far clipping plane won't be rendered by your camera, and won't be visible.
The visible area is called the frustum.
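As a plain-JavaScript illustration (not any library's API), a point is inside the camera's renderable depth range only when its distance falls between the two clipping planes:

```javascript
// Returns true when a point's distance from the camera lies between the
// near and far clipping planes — i.e. inside the renderable depth range.
function isWithinClippingRange(distance, near, far) {
  return distance >= near && distance <= far;
}

// With a typical near = 0.1 and far = 1000:
console.log(isWithinClippingRange(5, 0.1, 1000));    // true  — rendered
console.log(isWithinClippingRange(0.05, 0.1, 1000)); // false — before near
console.log(isWithinClippingRange(2000, 0.1, 1000)); // false — beyond far
```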
Where possible, we recommend using glTF (GL Transmission Format). Both .GLB and .GLTF versions of the format are well supported. Because glTF is focused on runtime asset delivery, it is compact to transmit and fast to load. Features include meshes, materials, textures, skins, skeletons, morph targets, animations, lights, and cameras.
Public-domain glTF files are available on sites like Sketchfab, and various tools include glTF export:
Blender by the Blender Foundation
Substance Painter by Allegorithmic
Modo by Foundry
Toolbag by Marmoset
Houdini by SideFX
Cinema 4D by MAXON
COLLADA2GLTF by the Khronos Group
FBX2GLTF by Facebook
OBJ2GLTF by Analytical Graphics Inc
…and many more
When glTF is not an option, popular formats such as FBX, OBJ, or COLLADA are also supported and regularly maintained by many libraries.
3D JavaScript engines like ThreeJS and BabylonJS use WebGL to render to the canvas, making things easier for JavaScript developers who are not experts in computer graphics. WebGL is a cross-platform, royalty-free web standard for a low-level 3D graphics JavaScript API, or programmable interface, for drawing interactive 2D and 3D graphics in web pages. WebGL connects your web browser to your device's graphics card, providing you with far more graphical processing power than is available on a traditional website.
To use WebGL capabilities, you need a device and a browser that support it. You can check which browsers support WebGL at caniuse.com by searching for WebGL.
According to caniuse.com, 98.16% of internet users globally are using a device and browser capable of WebGL 3D canvas graphics.
Unlike most Web APIs, WebGL is designed and maintained by the non-profit Khronos Group, not the World Wide Web Consortium (W3C). The WebXR Device APIs, and other APIs related to creating a 3D experience such as the Gamepad API, are part of the W3C.
To learn more, check out Wikipedia, the Khronos WebGL Wiki, or the official Khronos WebGL repository.
Field of View (FOV) is the extent of the scene that is seen on the display at any given moment. The value is in degrees.
Read more on wikipedia.
Normal is the direction a face of an object is pointing. It is used to correctly map images as textures to an object.
TODO: Examples...
When we talk about transformations in 3D space, we mean altering the position, orientation, or scale of an object. These transformations can include various operations such as translation (shifting the position), rotation (changing the orientation), shear (distorting the shape), scale (changing the size), reflection (flipping the object), and projection (mapping the 3D point onto a 2D plane).
To perform these transformations, a common technique is to use a transformation matrix. A transformation matrix is a mathematical matrix that stores the information about the desired transformation. By multiplying a Vector3 representing a point(x,y,z position) with the transformation matrix, we can apply the transformation to that point.
When we talk about "applying the matrix to the vector," it means that the transformation matrix is multiplied with the Vector3 point to achieve the desired transformation. The resulting Vector3 will reflect the transformed position, orientation, or scale of the original point in 3D space.
Matrix4 in Three.js docs
OpenGL matrices tutorial
If you are in a 3D scene on a 2D display, you can interact with the objects or buttons in your scene with a mouse or touch event, as you would in any other web app. One difference is that instead of checking whether we hit an HTML element like a button, we need to check the 3D position we are hitting and test whether there are any collisions with an object. We do not have a way to directly attach a touch or click event to a 3D object, but we can check whether the ray cast from our touch or click passes through an object.
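The idea behind that hit check can be sketched in plain JavaScript with a ray-sphere intersection test. Libraries such as Three.js wrap this up in a Raycaster; the function below is only an illustration of the underlying math.

```javascript
// Does a ray (origin o, normalized direction d) pass through a sphere
// (center c, radius r)? Classic quadratic-discriminant test.
function rayHitsSphere(o, d, c, r) {
  const oc = [o[0] - c[0], o[1] - c[1], o[2] - c[2]];
  const b = 2 * (oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2]);
  const cTerm = oc[0] ** 2 + oc[1] ** 2 + oc[2] ** 2 - r * r;
  return b * b - 4 * cTerm >= 0; // a = 1 because d is normalized
}

// A ray from the origin straight down -z hits a unit sphere at z = -5...
console.log(rayHitsSphere([0, 0, 0], [0, 0, -1], [0, 0, -5], 1)); // true
// ...but misses one offset far to the side.
console.log(rayHitsSphere([0, 0, 0], [0, 0, -1], [5, 0, -5], 1)); // false
```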
The aspect ratio of a camera defines the rendered image's width-to-height ratio.
You can use the window's inner height and inner width to calculate aspect ratio. If you are not rendering to the entire window, you can choose a specific size for your aspect ratio that would correspond to the container of the renderer.
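In plain JavaScript the calculation is just a division; in the browser you would pass window.innerWidth and window.innerHeight (or the container's dimensions), but here we use typical values:

```javascript
// Aspect ratio is render width divided by render height.
function aspectRatio(width, height) {
  return width / height;
}

console.log(aspectRatio(1920, 1080)); // ~1.778 (16:9)
console.log(aspectRatio(800, 600));   // ~1.333 (4:3)
```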
To create a basic scene we need to initialize a scene object. You can think of a scene as a stage that will hold your production. Everything that will be visible to the viewer will be added to the scene.
We need to add a camera object to the scene that will be our viewer's perspective. Anything in the scene but not in the view of the camera won't be visible on your canvas.
We also need a light source for our objects to be visible.
Finally, we will create a basic shape (mesh): a box.
Let's create the basic scene using different libraries.
ThreeJS basic scene
Although Three.js creates the scene, camera, and light, as well as the object, the code syntax is a little different.
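A sketch of the same basic scene in Three.js (it assumes the three module is available, e.g. via import * as THREE from "three" in a bundled browser app; sizes and colors are illustrative):

```javascript
// Scene: the stage that holds everything we render.
const scene = new THREE.Scene();

// PerspectiveCamera(fov in degrees, aspect, near, far).
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

// Renderer: draws the scene to a canvas appended to the page.
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A light, so materials that react to light are visible.
const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(0, 1, 1);
scene.add(light);

// The mesh: geometry (shape) plus material (surface).
const box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x44aa88 }));
scene.add(box);

// Render on every display refresh.
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```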
Node.js download
WebXR Viewer on Apple store
Web Server for Chrome extension
Visual Studio Code(VSCode) download
VSCode Live Server Extension.
TypeScript in 5 minute introductions
Learn X in Y: TypeScript
BabylonJS Tools and Resources
ThreeJS Github
ThreeJS Chrome Dev Tool
ThreeJS Web Editor
ThreeJS Discord
ThreeJS Subreddit
Create your first AR & VR applications on the Web
In this section, we will turn our 3D experience from the last section into an immersive experience. We will develop on our local device, laptop or desktop and run the code locally.
You can follow along with the tutorial using an online editor. If you want to learn how to develop in your local environment, please follow the checklist below.
Similar to the VR experience, you can add an AR Button to enable AR experiences. Additionally, you can specify the required and optional AR features your experience will use.
Add a controller: Returns a Group representing the so called target ray space of the controller. Use this space for visualizing 3D objects that support the user in pointing tasks like UI interaction.
When we hit a surface, we will add a reticle to our scene to indicate the surface.
Let's define the onSelect event handler that we attached to the controller. When the select event happens, it means the user has decided to place the object, and we create it at the chosen location.
Finally, in our render function, we will check on every XRFrame whether we have a hit-test source, so we can display the reticle on the surface.
To run the code on your device, you have to give access to your camera when prompted.
A hemispheric light is an easy way to simulate ambient environment light. In our case, we are doing the bare minimum: giving the light a name and setting its direction.
Setting the direction's y value to 1 while our main object is at the center, (0, 0, 0), means the light comes from above the object.
Overview of WebXR and BabylonJS features.
BabylonJS is a JavaScript/TypeScript 3D engine that makes creating WebXR experiences easier for the developer. Check out https://www.babylonjs.com/ for more info and code samples. Below is a great introduction to the tools and capabilities of BabylonJS.
WebXR Code Samples: https://doc.babylonjs.com/how_to/webxr_demos_and_examples
A-Frame: a declarative way to create a 3D scene
A-Frame simplifies creating a 3D scene by giving us a way to define our scene as HTML elements. Under the hood, A-Frame uses Three.js to do the same thing, but it allows you to change elements through attributes.
With A-Frame you can create a scene using the <a-scene> HTML tag and nest the object tags inside the scene. As a person who is used to working with the canvas and JavaScript, this is confusing to me, but I can see why it is easier for some.
The good news is, you can use Three.js to further create interactions for your A-Frame scenes.
Only a few loaders (e.g. ObjectLoader) are included by default with three.js — others should be added to your app individually.
Once you've imported a loader, you're ready to add a model to your scene. Syntax varies among different loaders — when using another format, check the examples and documentation for that loader. For glTF, usage with global scripts would be:
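With the global script builds, a typical GLTFLoader call looks like the sketch below (the model path is a placeholder, and a scene object is assumed to already exist):

```javascript
// Assumes three.js and GLTFLoader are loaded via <script> tags, and that a
// `scene` object already exists. The model path is a placeholder.
const loader = new THREE.GLTFLoader();

loader.load(
  "path/to/model.glb",
  // onLoad: add the loaded scene graph to our scene.
  (gltf) => { scene.add(gltf.scene); },
  // onProgress (optional): not used here.
  undefined,
  // onError: always log, so broken loads are visible in the console.
  (error) => { console.error(error); }
);
```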
Change the onSelect function to load and place the model, instead of the Sphere mesh we were placing previously.
We can add event callbacks for loading manager.
See GLTFLoader documentation for further details.
You've spent hours modeling an artisanal masterpiece, you load it into the webpage, and — oh no! 😠It's distorted, miscolored, or missing entirely. Start with these troubleshooting steps:
Check the JavaScript console for errors, and make sure you've used an onError callback when calling .load() to log the result.
View the model in another application. For glTF, drag-and-drop viewers are available for three.js and babylon.js. If the model appears correctly in one or more applications, file a bug against three.js. If the model cannot be shown in any application, we strongly encourage filing a bug with the application used to create the model.
Try scaling the model up or down by a factor of 1000. Many models are scaled differently, and large models may not appear if the camera is inside the model.
Try to add and position a light source. The model may be hidden in the dark.
Look for failed texture requests in the network tab, like C:\\Path\To\Model\texture.jpg. Use paths relative to your model instead, such as images/texture.jpg — this may require editing the model file in a text editor.
You can simply open up the playground link on your phone or
A few things to note:
Experimenting with and changing any code in the playground and clicking the Run button will run your code. Running the code will not affect the original code in the playground you are currently using. The original code can be restored by refreshing the browser.
You need to save your changes to create a new version of the code in the Playground. That way, you can share the link with anyone.
Title and Version: As stated.
Language: Typescript/JavaScript switch.
Theme: Choose the theme for the playground
Font size: Set the font size in the editor.
Safe Mode: When the checkbox is ticked the playground issues a "leaving the page?" confirmation warning when you try to unload/reload a freshly-edited, un-saved scene.
Editor: The checkbox hides or un-hides the editor portion of the playground.
Full Screen: Makes the render area full screen.
Editor Full Screen: Makes the editor area full screen.
Format Code: Pretty prints the code.
Minimap: Display the minimap of the code editor.
Inspector: The checkbox toggles the playground scene inspector which shows a multitude of variable values.
Metadata: This is where you describe your playground, allowing yourself and others to search the playground database for examples of use.
Menu : Contains Run, New, Clear, Save and Zip as submenus.
Code : Bottom Left Corner - switch to Code View and Editor.
Scene : Bottom Right Corner - switch to Scene View.
Watch .
The Playground is an online editor where you can write your code and view the results. You can edit your code on the left-hand side and see the result on the right. You can find more examples by clicking the Examples button on the top-right menu and searching for concepts.
Run : Commands the playground to try to render your scene.
Save : Causes your scene to be permanently stored in the playground's database and it will issue a unique URL for each save. On save you will be asked to complete the metadata so that it can be searched for. Once saved it is a good idea to bookmark the page so you can return to it later. You could then share the URL with others, for example, if it is not working as you expect you can ask a question in the forum along with the link to your playground.
Download : Allows you to download a zip file named sample.zip. Once downloaded and unzipped, you will see a file named index.html
which contains everything necessary to run the code in your browser, including links to external babylon.js and other files.
New : Places a basic createScene()
function into the editor along with code to initialise the scene variable and provide a camera.
Clear : Empties all the code out of the playground editor. You could then paste in any createScene function you are working on locally.
Settings : The Settings button has a sub menu with extra options
Version : Allows and shows your choice of the BABYLON.js framework, either the current stable one or the latest preview version.
Examples : A drop down menu giving examples of playgrounds with a search filter.
Dragging an object in 6 .
Remember to add your .env file to .gitignore, and never commit your key to GitHub.
We will create our scenes, save and share them on playground.
To create a basic BabylonJS scene, we initialize a scene, create a camera, a light, and a mesh, and return the scene object.
The scene object takes the engine as its input argument. You don't have to worry about the engine when you are working in the Playground, but when you are working on a local code sample, we will create the engine as well.
The Arc Rotate Camera acts like a satellite in orbit around a target and always points towards the target position. In our case, it points towards the box: wherever you move your camera, it will still point towards the box.
The Arc Rotate Camera parameters are: the name you want to give your camera, alpha (radians, the longitudinal rotation), beta (radians, the latitudinal rotation), radius (the distance from the target position), the target position (the center, where the box is created by default), and the scene (an optional argument). In this code example the scene is not given as an argument, so it defaults to the scene object in the Playground.
Setting beta to 0 or PI can, for technical reasons, cause problems.