
· 13 min read
Kevin Glass

At Rune, we're helping devs make casual multiplayer games using JS that are played by groups of friends on our iOS + Android app. There's been a lot of excitement among devs about using AI to power wacky gameplay, so we added AI to the Rune SDK. To dogfood this, I decided to build out some AI games to explore what interesting gameplay could be achieved using LLMs. So I set myself a challenge: can I build 7 multiplayer AI games in 7 days?

Here's what happened, along with the MIT-licensed source code and what I learnt during the process. I've recorded a little video playing all the games to give a quick feel for what they're like.

Day 1 - Storyteller AI

Play! | Kick Off | Time Lapse | Post-Mortem | Code

The first game, Storyteller AI, has the players collaboratively write an epic story together by suggesting simple terms that the AI then weaves into the story. There's no winner, or rather everyone is a winner with a brand new story created. The fun comes when people start suggesting strange and wacky terms which are attributed to them in the story.

What did I learn?

  • The first game was always going to find the edges of the implementation, and development naturally ran into a few bugs, which were fixed alongside writing the game on the first day.
  • Prompting an AI to write a story needs guidelines, otherwise it just goes nowhere. It's important to set out clearly that there must be a conclusion within a fixed number of iterations.
  • Players struggle to think of suitable terms. Having the AI suggest some feels like an AI talking to an AI, but playtesters appreciated the suggestions.
  • The OpenAI API can occasionally take 10+ seconds to respond; you have to account for that possibility in your design.
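That last point is worth a sketch. One simple way to keep the game moving is to race any promise-based AI call against a timeout and fall back to a canned response. This helper is illustrative only and not part of the Rune SDK:

```typescript
// Hedged sketch: race a slow AI call against a timeout so the game never
// hangs on a 10+ second response. `work` stands in for whatever promise
// wraps the request; the fallback value is game-specific.
async function withTimeout<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms)
  })
  const result = await Promise.race([work, timeout])
  if (timer !== undefined) clearTimeout(timer)
  return result
}
```

In a game you would pair this with a visible "thinking" state so players know the AI is still working rather than the game being stuck.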

Day 2 - Dating AI Game

Play! | Kick Off | Time Lapse | Post-Mortem | Code

Definitely my favorite idea from the original list, the Dating AI Game is based on those old blind dating shows from the 70s and 80s. Players take the part of contestants attempting to win a date by answering their potential date's questions in the most fun way. The AI plays the part of the question asker and the overall narrator. Players really do seem to enjoy this one, particularly entering the most rude answer they can think of.

What did I learn?

  • Players will always try to be rude / outlandish - decide how you want the AI to respond, don't leave it to chance!
  • Asking the AI to execute 'three rounds of questions and then a conclusion' doesn't always result in 3 rounds - sometimes 2, sometimes 8. You need to be very strict and clear about when the game should finish. Even if all your examples are 3 rounds, the AI may not pick up on this.
  • Coming up with varied questions is difficult for the AI, especially in a constrained contextual scenario like an old-school dating show driven by examples. It often repeats the same questions across sessions. You can get better results by explaining the reason for the example, e.g. "The example below is for structure not for content. Please come up with as varied questions and responses as possible."
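Putting those two learnings together, the kind of strict framing that helped can be sketched as a small prompt builder. The wording here is illustrative, not the exact prompt used in the game:

```typescript
// Hedged sketch: state the round count as a hard rule and label the example
// as structural only, per the learnings above. Wording is illustrative.
function buildDatingShowPrompt(rounds: number): string {
  return [
    "We are playing a dating-show game.",
    `Ask exactly ${rounds} rounds of questions, then write a conclusion.`,
    `Never run more or fewer than ${rounds} rounds.`,
    "The example below is for structure, not for content.",
    "Come up with as varied questions and responses as possible.",
  ].join("\n")
}
```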

Day 3 - Find the AI

Play! | Kick Off | Time Lapse | Post-Mortem | Code

With Find the AI, I'm adapting the good old Werewolf into a simple Rune game. The AI generates a simple question, players answer it and so does the AI. The AI has been instructed to sound as human as possible including adding typos and slang. The players are then presented all the answers and have to guess which one is the AI. Having the AI act human is fun, but watching players trying to sound like an AI is really great.

What did I learn?

  • Making an AI sound human isn't easy.
  • AI responses tend to be elaborate and use great grammar and spelling. Humans, especially those using mobile keyboards, don't do that. They generally use short answers and they make mistakes (autocorrect is everyone's friend, right?).
  • Asking the AI to sound more human with "Use slang or abbreviations to sound more human" results in an obvious AI that intentionally makes a mistake every time and throws in slang for no reason.
  • Giving the AI a "dial" gives the best results: "use slang 50% of the time, make an auto-correct mistake 10% of the time" seemed to give better, more human responses, but didn't make it into the final game.

Day 4 - The AI Times

Play! | Kick Off | Time Lapse | Post-Mortem | Code

A novel idea that was conceptualized by the team. Players are presented with a random image and asked to provide a short caption. The generative AI is then prompted to create a tabloid style front page including headline and tag line. The players then vote on their favorite story to choose a winner. The combination of abstract, quirky images and the imagination of the players results in some really wonderfully silly front pages.

What did I learn?

  • Telling the AI what to value as input is really important. In this case I am feeding the AI with the player's caption and a description of the image that was provided to the player (generated offline). Initially the AI was producing essentially the same story for all players because they all had the same image.
  • Being explicit about the weight to apply to the inputs helped a lot, e.g. "The player's input should be considered 10x more important than the content of the image when writing the articles."
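A minimal sketch of that weighting in prompt form (the wording and helper name are illustrative; the real game also included structure and examples):

```typescript
// Hedged sketch: combine the per-player caption with the precomputed image
// description, with an explicit weighting instruction up front.
function buildArticlePrompt(caption: string, imageDescription: string): string {
  return [
    "Write a tabloid-style front page with a headline and tag line.",
    "The player's caption should be considered 10x more important than the image description.",
    `Image description: ${imageDescription}`,
    `Player caption: ${caption}`,
  ].join("\n")
}
```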

Day 5 - GIF vs AI

Play! | Kick Off | Time Lapse | Post-Mortem | Code

Probably the player favorite so far is GIF vs AI, a twist on the popular Death by AI game. The AI generates a life-threatening scenario and a GIF is chosen to represent it. Players then have to respond with how they'll attempt to survive by selecting a GIF. Finally, the AI evaluates the scenario and the provided survival GIF to determine if the player survives and selects a GIF to represent the outcome. Adding in the GIFs makes the game faster to play and the AI interpretation of the GIF can lead to unintended and funny outcomes.

What did I learn?

  • The Tenor-generated GIF descriptions provided as metadata often describe just the first frame and don't include what happens in the animation. Though not accurate, in my case this led to fun mistakes!
  • The AI appears capable of generating "good" search terms based on longer textual descriptions. While AI summarization is expected to be good, I hadn't expected it to be able to extract the relevant terms from the text to get a reasonably accurate GIF result from a search.

Day 6 - AI Art Judge

Play! | Kick Off | Time Lapse | Post-Mortem | Code

When adding AI capabilities to the SDK, I was quite happy that we added image analysis as part of the same single API. The AI Art Judge game uses this to become an art critic. Players suggest a 'thing' to be drawn. They're then given a fixed time to draw a picture of the randomly-picked item. Once everyone is done, the AI evaluates the drawings based on the original input and provides an overly artsy and funny critique. It turns out AI is an expert in being pompous.

What did I learn?

  • Asking the AI which is the best "...." doesn't produce a result players can understand. The AI often doesn't pick the objectively better representation of the item.
  • I ran out of time, but it may have been better to ask "what is in this image?" and then compute a match weight comparing that output with the original item description.
  • Image analysis isn't expensive as long as your images are smaller than 512x512 (generally plenty for this sort of game). Any bigger and costs go up quickly.
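Given that cost cliff, it's worth downscaling drawings before analysis. A small helper can compute the target dimensions so the longest edge stays at 512px; the actual resize would then be the usual canvas drawImage/toDataURL step in the browser. This helper is an illustration, not SDK code:

```typescript
// Hedged sketch: compute dimensions that fit within maxSize on the longest
// edge, never upscaling. Keeping drawings at or under 512x512 stays in the
// cheap image-analysis tier described above.
function fitWithin(
  width: number,
  height: number,
  maxSize = 512
): { width: number; height: number } {
  const scale = Math.min(1, maxSize / Math.max(width, height))
  return { width: Math.round(width * scale), height: Math.round(height * scale) }
}
```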

Day 7 - AI Emoji Interview

Play! | Kick Off | Time Lapse | Post-Mortem | Code

In the final game of the 7 days, I wanted to try out how AI interprets emojis. The AI Emoji Interview game has the AI acting as an interviewer for a made-up and crazy job opportunity. Players take part in the interview but can only answer with emojis. The AI interprets the emojis as answers and eventually selects a player to get the job. Switching from text to emoji speeds the game up, and players can still get their point across easily.

What did I learn?

  • AI can understand emoji as easily as normal text. It can even interpret the meaning of combined emojis in most common cases.
  • Similar to generating questions above, generating "crazy" jobs is difficult for the AI. It tends to generate similar jobs even when asked to explore a broad search space. In hindsight it may have been better to generate 100 crazy jobs offline and use these as input rather than relying on a dynamically-generated job title.
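The offline approach is trivial to wire up. The jobs below are placeholders of my own; a real list would be generated offline and be much longer:

```typescript
// Hedged sketch: pick from a pre-generated list instead of asking the model
// for a "crazy job" at runtime, which tended to repeat itself.
const CRAZY_JOBS = [
  "Professional cloud shepherd",
  "Underwater librarian",
  "Haunted-house quality inspector",
]

// Accepting the random source as a parameter keeps this testable and lets a
// deterministic game runtime supply its own synced randomness.
function pickJob(random: () => number = Math.random): string {
  return CRAZY_JOBS[Math.floor(random() * CRAZY_JOBS.length)]
}
```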

General Observations

It's been an interesting week of making games with lots of experimentation using AI through the Rune SDK. All the games were playable and fun to a degree, but each could have been taken further and refined more. Here are some general impressions of using AI in games:

  • Examples are king! Providing examples to the AI as part of the initial prompt keeps the structure of the interaction manageable.
  • Structured output, e.g. asking the AI to produce JSON, seems to greatly decrease the appropriateness of the responses and limits the pseudo-creativity of the output.
  • The common advice is to repeat important parts of the prompt towards its end. This didn't seem to have much effect this week. Adding explicit rules at the end of the prompt that reinforced parts of the initial prompt had a larger impact.
  • Calling out early in the prompt that we're playing a game seems to result in the AI having a decent grasp immediately that there is likely to be a series of interactions. It also seems to set a tone for responses.
  • Using the system vs user role does have an impact. system should be used to set the rules of the game and the general structure, while user should be used to capture player input. Providing the set of rules in the user role doesn't work as well as using system. Putting player input in the system role works fine as long as you prefix it with "the user entered...", but this becomes awkward quickly. Using system for player input also leaves more room for players to provide input that tries to change the rules.
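That role split can be sketched as a small message builder (the shapes mirror the prompt format covered later in this post; the helper itself is illustrative):

```typescript
type Role = "system" | "user"

interface ChatMessage {
  role: Role
  content: { type: "text"; text: string }
}

// Rules and structure go in the system role; raw player text stays in user
// messages, which makes it harder for players to rewrite the rules.
function buildGameMessages(rules: string, playerInputs: string[]): ChatMessage[] {
  const messages: ChatMessage[] = [
    { role: "system", content: { type: "text", text: rules } },
  ]
  for (const input of playerInputs) {
    messages.push({ role: "user", content: { type: "text", text: input } })
  }
  return messages
}
```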

Preparation

Trying to create a game per day means getting everything in place beforehand. As I was always told, Prior Preparation Prevents Pretty Poor Performance. In this case I started with 25 ideas for AI games, whittled that down to 7 good ones, and built wireframes to explore the design. Then our designer, the formidable Shane, took them and produced beautiful graphics in the associated themes.

With the designs in hand I was ready for building 7 AI games in 7 Days.

Rules

If you're going to have a challenge, you might as well set out the rules at the start so here goes:

  1. 7 Days is consecutive working days - weekends off!
  2. A day is a normal 8-hour working stint. No pulling an all-nighter and claiming the game was built in a day.
  3. Games start from the Rune standard TypeScript template.
  4. Code reuse between games is done via copy-paste of common code. No building a library and then building 7 games on it.
  5. All games should run in the Dev UI and on the production Rune app.
  6. Of course, all the games should be built using Rune!

How does it work?

Let's take a quick look at how the Rune SDK exposes AI to developers. It's quite simple to work with and that's what made it possible to churn out these games. You can check out the full documentation for more details.

Prompting the AI

To send a prompt to the AI, we invoke Rune.ai.promptRequest() from anywhere in our logic code, passing a collection of messages for the AI to interpret. Since the AI API is stateless, we need to pass it the past messages and the new ones in the request if we want it to understand a full conversation.

As you can see below, the prompt request can take both text and image prompts. In the Rune SDK, we only support passing in data: URIs for images because we want to ensure the games are always available to the players, hence not allowing dependencies on external images.

Rune.ai.promptRequest({
  messages: [
    {
      role: "user",
      content: { type: "text", text: "What is Rune.ai?" },
    },
    {
      role: "user",
      content: { type: "image_data", image_url: dataUri },
    },
  ],
})
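Because the API is stateless, a game typically keeps the transcript in its own state and resends the whole thing with each new prompt. A minimal sketch of that bookkeeping (the assistant role and helper names are my assumptions for illustration, not part of the SDK):

```typescript
type HistoryMessage = {
  role: "system" | "user" | "assistant"
  content: { type: "text"; text: string }
}

// The running transcript; in a real game this would live in game state.
const history: HistoryMessage[] = []

// Append the player's new text and return the full list to send as the
// messages array of the next request.
function buildRequestMessages(playerText: string): HistoryMessage[] {
  history.push({ role: "user", content: { type: "text", text: playerText } })
  return [...history]
}

// Record the AI's reply so the next request includes the full conversation.
function recordResponse(responseText: string): void {
  history.push({ role: "assistant", content: { type: "text", text: responseText } })
}
```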

Getting AI responses

The response to the prompt is received through a callback in the game logic as shown below. The response text is provided back to the logic, which can then update game state.

Rune.initLogic({
  ...
  ai: {
    promptResponse: ({ response }) => {
      console.log(response)
    },
  },
})

As you can tell, it's a quite minimal API. The idea was to make it as easy as possible to incorporate LLMs into the gameplay of Rune games. Behind the scenes, the SDK handles the requests/responses, exponential backoff in case of request failures, etc. It's all fairly standard stuff. The real magic is in Rune's netcode for syncing players and of course in the AI itself.

Conclusion

It's been a fun, if tiring, week writing a game a day. The games themselves have come out pretty fun and I'm looking forward to seeing how they do when they hit the hundreds of thousands of players on the Rune platform. Pretty much all the general guidance on writing prompts applied to using them for games, though it is somewhat challenging to keep the AI from expanding outside the rules you want to enforce. I can't wait to see what kind of AI-powered games the future holds.

Any thoughts or suggestions for how we could have made the games better? Join us on the Rune Discord for a chat!


· 3 min read
Amani Albrecht

🛠️ App Improvements

  • Unveiled our new and improved login experience: Upgraded flow resolves captcha issues, improves multi-account handling, and remembers email IDs to speed up logins! 🚀🔑
  • Released a complementary onboarding experience, giving encouraging gems & guiding users through name customization, avatar setup, and enabling notifications ✨🎉
  • Updated our login logic to be even simpler—removed the birthday picker and now you just input your age directly 🎂
  • Stopped showing the old locked view for outdated clients. With the new login experience, we've eliminated the need for an update required screen 🔒
  • Removed the "Rune Again" overlay now that our name transition has fully settled in ✨
  • Enhanced the game lists with pull-to-refresh functionality across all variants 🕹️🔄
  • 💎 Restyled our gem icons, animations, and balance design; setting the stage for bigger changes ahead!
  • Launched game comment translations, enabling everyone to automatically see all game comments in their own language 💬
  • Improved multi-language and diacritic support in all searches, enhancing global usability 🌍

🪲Bug Fixes

  • Prevented a race condition: now if a user enters their age too quickly, the app handles auth preparation more smoothly to avoid error screens 🏁
  • Fixed guest verification to work without restarts after incomplete email verifications and streamlined age confirmation flow to prevent back navigation errors.
  • Fixed a login issue by removing an unnecessary fallback that placed error and done handlers incorrectly, preventing uncaught errors.
  • Tweaked the splash screen logic: It stays visible longer if the app isn't fully initialized and hides more aggressively once ready, reducing bugs!
  • Updated logging to deprioritize uncompressed messages and ensure foreground activities are captured more effectively 📲🔍
  • Updated localUser sync when app user changes, fixing a bug where display name and avatar updates weren’t immediately reflected. Now, they’re correct on next login 👤
  • Caught and removed a potential source of network failures to reduce crashes and improve stability 🌐
  • Implemented possible fixes for production crashes related to 'TypeError: Cannot read property 'nodes' of undefined' with defensive programming.
  • Addressed log spam issues by fixing getLinkKey errors that occur when user share links are not yet populated upon login or logout 🔗
  • Fixed a crash by ensuring navigation containers correctly reference existing routes, especially during login scenarios.
  • Upgraded react-native-screens from 3.34.0 to 3.35.0 to address a known crash issue.
  • Resolved an issue with images in our expo updates, allowing us to re-enable auto-updates for iOS 🍏

💻 SDK Improvements

  • Introduced world time that syncs milliseconds since epoch across server and clients, making it possible for you all to build daily changes and seasonal events directly into your games!
  • Improved the warning message for actions taken after game over within the SDK, enhancing clarity and guidance 🚨
  • Updated the allowed package size for games, accommodating larger game files 📦
  • Added the ability to playtest AI games directly in the SDK, complete with types for developers to specify image data for Open AI integration 🤖

· 15 min read
Kevin Glass

At Rune, we love all types of games—from 2D to isometric to 3D—and we want you to build with the tools you prefer on our platform. Creating 3D games introduces some extra pieces, like camera controls and character movement, which take time to make feel natural, even in single-player settings. The examples below serve as a straightforward reference for building 3D games with Three.js on the Rune platform.

You can give the demo a play in the tech demos section of the docs.

Approach

Building a 3D game is more time-consuming than building a 2D game, though the multiplayer components remain essentially the same when using the Rune SDK. In this tech demo, we'll cover key aspects of building a multiplayer 3D game:

  • Rendering, shadows, and lighting
  • Model loading
  • Character and camera controllers
  • Input and virtual joystick
  • Multiplayer support with the Rune SDK

We’ll follow a similar approach to previous tech demos, where player inputs are exchanged between the client and logic layers. The logic layer manages all updates to the game world, including collisions. On the client side, we ensure our 3D world mirrors the logical world, interpolating between positions and rotations for smooth visuals.

With fantastic assets from Kenney.nl, we’ll build a small scene for players to explore together.

Rendering, Shadows, and Lighting

Setting Up the Renderer

Below is the configuration for a high-quality yet mobile-friendly Three.js renderer. Key configuration options include:

  • Power Preference - Maximizes graphics performance on mobile devices, though it can drain battery life more quickly. Adjust this if rendering frequency can be reduced.
  • Anti-aliasing - Smooths jagged edges in 3D rendering, significantly improving visual quality.
  • Tone Mapping - Adjusts color depth and shading. For vibrant colors, ACESFilmicToneMapping offers a rich, deep effect.
  • Shadows - Adds realism by enabling shadows, though it can be demanding on low-end devices. Shadows must be enabled separately for both meshes and lights.

// Main Three.js scene setup
const scene: Scene = new Scene()

// Renderer setup
const renderer = new WebGLRenderer({
  powerPreference: "high-performance",
  antialias: true,
})

// Configure tone mapping and shadow rendering
renderer.toneMapping = ACESFilmicToneMapping
renderer.shadowMap.enabled = true
renderer.shadowMap.type = PCFShadowMap

// Add renderer to the DOM
renderer.setSize(window.innerWidth, window.innerHeight)
document.body.appendChild(renderer.domElement)

// Set background color and add fog for depth
const skyBlue = 0x87ceeb
scene.background = new Color(skyBlue)
scene.fog = new FogExp2(skyBlue, 0.02)

// Render the scene at configured FPS
setInterval(() => {
  render()
}, 1000 / RENDER_FPS)

This setup creates a DOM element (canvas) for the renderer and attaches it to the document. We set a sky-blue background with fog to add depth and enhance visual appeal.

We use a setInterval loop capped at 30 FPS to keep performance reasonable on low-end devices, though requestAnimationFrame() could be used for a higher frame rate.
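If you do want requestAnimationFrame(), the same cap can be kept by skipping frames until a full frame interval has elapsed. A sketch of that approach (the loop itself is browser-only and not started here; render is the function from the surrounding code):

```typescript
const RENDER_FPS = 30
const FRAME_INTERVAL_MS = 1000 / RENDER_FPS

// Pure check: has a full frame interval elapsed since the last rendered frame?
function shouldRender(now: number, lastFrame: number): boolean {
  return now - lastFrame >= FRAME_INTERVAL_MS
}

let lastFrameTime = 0

function loop(now: number) {
  if (shouldRender(now, lastFrameTime)) {
    lastFrameTime = now
    // render()  // the render function from the setup above
  }
  // requestAnimationFrame(loop)  // re-schedule; browser-only
}
```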

The render function updates game elements as follows:

// Update different game elements
updateInput()
const localPlayer = getLocalCharacter3D()
if (localPlayer) {
  getShadowingLightGroup().position.x = localPlayer.model.position.x
  getShadowingLightGroup().position.z = localPlayer.model.position.z

  updateCamera(localPlayer)
}
updateCharacterPerFrame(1 / RENDER_FPS)

// Render the game with Three.js
renderer.render(scene, getCamera())

The input, camera, lighting, and character updates shown here will be discussed later in this article.

Lights and Shadows

In addition to the renderer, we need to light the scene and enable shadows. Three.js includes built-in shadow mapping, which we configure below.

  • Ambient Light - Provides basic visibility across the scene but doesn’t add shading.
  • Directional Light - Illuminates models from a specific direction, adding shading; it also serves as the source of shadows.

// Basic ambient lighting for visibility
ambientLight = new AmbientLight(0xffffff, 1)
ambientLight = new AmbientLight(0xffffff, 1)
getScene().add(ambientLight)

// Create directional light for shadows
lightGroup = new Object3D()
directionalLight = new DirectionalLight(0xffffff, 1)
directionalLight.position.set(-3, 3, 3)
lightGroup.add(directionalLight)
lightGroup.add(directionalLight.target)
getScene().add(lightGroup)

directionalLight.castShadow = true
directionalLight.shadow.mapSize.width = SHADOW_MAP_SIZE
directionalLight.shadow.mapSize.height = SHADOW_MAP_SIZE
directionalLight.shadow.camera.near = SHADOW_MAP_NEAR_PLANE
directionalLight.shadow.camera.far = SHADOW_MAP_FAR_PLANE
directionalLight.shadow.camera.left = -SHADOW_MAP_BOUNDS
directionalLight.shadow.camera.right = SHADOW_MAP_BOUNDS
directionalLight.shadow.camera.top = SHADOW_MAP_BOUNDS
directionalLight.shadow.camera.bottom = -SHADOW_MAP_BOUNDS

We configure the shadow generator’s frustum to control the extent of the shadows. Shadow mapping renders the scene from the light’s perspective, creating a texture to display shadows. By moving the directional light with the player, shadows are rendered only locally, optimizing performance.

Model Loading

Three.js simplifies model loading, especially when using Kenney’s GLTF models. Here’s a loader function for GLTF models:

function loadGLTF(url: string): Promise<GLTF> {
  console.log("Loading: ", url)

  return new Promise<GLTF>((resolve, reject) => {
    gltfLoader.load(
      url,
      (model) => {
        model.scene.traverse((child) => {
          child.castShadow = true
          child.receiveShadow = true
        })
        resolve(model)
      },
      undefined,
      (e) => {
        reject(e)
      }
    )
  })
}

To ensure all models can cast and receive shadows, we traverse each loaded model. If textures aren’t automatically loaded, we apply them manually as shown below.

export function loadTexture(url: string): Promise<Texture> {
  return new Promise<Texture>((resolve, reject) => {
    textureLoader.load(
      url,
      (texture) => {
        texture.colorSpace = SRGBColorSpace
        resolve(texture)
      },
      undefined,
      (e) => {
        reject(e)
      }
    )
  })
}

export function applyTexture(obj: Object3D, texture: Texture) {
  obj.traverse((node) => {
    if (node instanceof Mesh) {
      node.material = new MeshLambertMaterial({ map: texture })
    }
  })
}

Character and Camera Controller

Creating a responsive 3D character and camera controller can be an intricate task. For instance, our recent release, MeatSuits, features a controller inspired by Roblox. This demo distills the basics of that controller.

First, we set up a basic perspective camera:

const lookAt: Vector3 = new Vector3(0, 1, 0)
let targetAngleY: number = 0
let targetAngleZ: number = Math.PI

const camera: PerspectiveCamera = new PerspectiveCamera(
  45,
  window.innerWidth / window.innerHeight,
  0.1,
  150
)

We then update the camera’s position and target each frame to ensure smooth motion:

export function updateCamera(targetCharacter: Character3D) {
  const cameraHeight = 2
  const cameraDistance = 4
  const cameraSoftness = 0.2
  const cameraTargetHeight = 1

  const { x, y, z } = targetCharacter.model.position

  const targetPosition = new Vector3(cameraDistance, 0, 0)
  targetPosition.applyEuler(new Euler(0, -targetAngleY, -targetAngleZ))
  targetPosition.add(new Vector3(x, y + cameraHeight, z))

  camera.position.lerp(targetPosition, cameraSoftness)
  lookAt.lerp(new Vector3(x, y + cameraTargetHeight, z), cameraSoftness)
  camera.lookAt(lookAt)
}

It’s simple, right? The real trick with the Roblox-style controller is that joystick movement is interpreted based on the camera’s current viewpoint. So pushing up on the joystick always moves forward in the view, and pushing left always moves left relative to the camera.

But how do we turn corners, then? As we’ll see in the input section below, there’s a subtle tweak to the rules applied on the client side.

Input and Virtual Joystick

While the demo may seem to use only a couple of inputs, there are actually several elements that make the controller feel "right." First is the camera view adjustment—this allows players to drag the view using a second finger (separate from joystick control). For us, this is configured in the renderer code:

renderer.domElement.addEventListener("touchstart", (e) => {
  mouseX = e.targetTouches[0].clientX
  mouseY = e.targetTouches[0].clientY
  mouseDown = true
})
renderer.domElement.addEventListener("touchend", () => {
  mouseDown = false
})
renderer.domElement.addEventListener("touchmove", (e) => {
  if (mouseDown) {
    const dx = e.targetTouches[0].clientX - mouseX
    const dy = e.targetTouches[0].clientY - mouseY
    mouseX = e.targetTouches[0].clientX
    mouseY = e.targetTouches[0].clientY

    rotateCameraZ(dy * 0.01)
    rotateCameraY(dx * 0.01)
  }
})

Pretty simple, right? Dragging a finger on the renderer’s background adjusts the camera rotation.

Next, we handle the "real" input through the joystick and jump button, retrieving the joystick state (and following key presses if testing on the web):

const joystickState = getJoystickState()
const currentControls: Controls = {
  x: 0,
  y: 0,
  cameraAngle: getCameraAngle(),
  jump: isKeyDown(" ") || jump,
}

// generate controls to send to the server
if (isKeyDown("a") || joystickState.x < -DEAD_ZONE) {
  currentControls.x = isKeyDown("a") ? -1 : joystickState.x
}
if (isKeyDown("d") || joystickState.x > DEAD_ZONE) {
  currentControls.x = isKeyDown("d") ? 1 : joystickState.x
}
if (isKeyDown("w") || joystickState.y > DEAD_ZONE) {
  currentControls.y = isKeyDown("w") ? 1 : joystickState.y
}
if (isKeyDown("s") || joystickState.y < -DEAD_ZONE) {
  currentControls.y = isKeyDown("s") ? -1 : joystickState.y
}

Note that we’re also recording the current cameraAngle as part of the controls. This allows the movement code in the logic (see below) to interpret the inputs correctly.

Next, we add a small tweak that lets us turn corners. When moving left or right, we move perpendicular to the camera angle, but we also gently turn the camera along with the movement:

// if the controls indicate left/right motion then rotate the camera
// slightly to follow the turn
if (currentControls.x < 0) {
rotateCameraY(-CAMERA_ROTATE * (Math.abs(currentControls.x) - DEAD_ZONE))
}
if (currentControls.x > 0) {
rotateCameraY(CAMERA_ROTATE * (Math.abs(currentControls.x) - DEAD_ZONE))
}

This means that if the player is running left or right, the camera tries to turn to follow them. If they keep moving in those directions, they’ll eventually run in a circle since movement is relative to the camera angle.

Once we have the controls for this frame, we need to update the logic accordingly. Rune prevents network flooding by allowing only 10 updates per second, so we avoid sending updates too frequently or when control inputs haven’t changed:

// only send the control update if something has changed
if (
lastSentControls.x !== currentControls.x ||
lastSentControls.y !== currentControls.y ||
lastSentControls.cameraAngle !== currentControls.cameraAngle ||
lastSentControls.jump !== currentControls.jump
) {
// only send the control update if we haven't sent one recent, or if:
//
// * We've stopped
// * We've jumped
//
// Need to do those one's promptly to make it feel like the player
// has direct control
if (
Date.now() - lastSentTime > CONTROLS_SEND_INTERVAL ||
currentControls.jump || // send jump instantly
(currentControls.x === 0 &&
currentControls.y === 0 &&
(lastSentControls.x !== 0 || lastSentControls.y !== 0))
) {
Rune.actions.update(currentControls)
lastSentControls = currentControls
lastSentTime = Date.now()
jump = false
}
}

However, if we stop moving or jump, we want to send that input immediately—this ensures the player feels direct control over their character. Any delay in stopping would make the controls feel laggy.

Now we have input, models, a renderer, lighting, and shadows! All that’s left is to make the demo multiplayer.

Multiplayer Support

Let’s start by looking at the client code. As we’ve seen in previous tech demos, there’s a callback that Rune uses to notify us of game state updates: onChange:

Rune.initClient({
  onChange: ({ game, yourPlayerId }) => {
    // build the game map if we haven't already
    buildGameMap(game.map)

    for (const char of game.characters) {
      // there's a new character we don't have in our scene yet
      if (!getCharacter3D(char.id)) {
        const char3D = createCharacter3D(char)
        if (char.id === yourPlayerId) {
          localPlayerCharacter = char3D
        }
      }

      // update the character based on the logic state
      updateCharacter3DFromLogic(char)
    }
    for (const id of getCurrentCharacterIds()) {
      // one of the scene characters has been removed
      if (!game.characters.find((c) => c.id === id)) {
        removeCharacter3D(id)
      }
    }
  },
})

That’s all of our onChange code. Here’s how we apply it:

  • Build the game map if we haven’t already (see below)
  • Ensure any characters defined in the logic have a 3D model in our world
  • Update any models in our world based on the logic state
  • Remove any models in our world that don’t exist in the logic

On the logic side, we have a bit more to consider. Our game state is lightweight, containing the player’s input, the game world, and the character moving within it:

// the game state we store for the running game
export interface GameState {
  // the game map for collisions etc.
  map: GameMap
  // the controls reported for each player
  controls: Record<PlayerId, Controls>
  // the characters in the game world
  characters: Character[]
}

When we start up, we add a character to the world for each player:

setup: (allPlayerIds) => {
  const state: GameState = {
    map: createGameMap(),
    controls: {},
    characters: [],
  }

  // create a character for each player
  for (const id of allPlayerIds) {
    addCharacter(id, state)
  }
  return state
},

Similarly, when players join or leave, we simply update the characters in the world:

events: {
  playerLeft: (playerId, { game }) => {
    // remove the character that represents the player that left
    game.characters = game.characters.filter((c) => c.id !== playerId)
  },
  playerJoined: (playerId, { game }) => {
    // someone joined, add a new character for them
    addCharacter(playerId, game)
  },
},

In the logic's update() loop, we move the characters around the world based on their inputs. The key detail here is that the player's inputs are interpreted relative to their camera angle:

// if the player is trying to move then apply the movement, assuming it's not blocked
if (controls && (controls.x !== 0 || controls.y !== 0) && character) {
  // work out where the player would move to
  const newPos = getNewPositionAndAngle(controls, character.position)
  // check the height at that location
  const height = findHeightAt(game, newPos.pos.x, newPos.pos.z)
  const step = height - character.position.y
  // if we can step up the height or it's beneath us then move the player
  if (step < MAX_STEP_UP) {
    // not blocked
    if (step > 0) {
      // stepping up
      character.position.y = height
    }
    character.position.x = newPos.pos.x
    character.position.z = newPos.pos.z
    // record the speed so the client side knows how quickly to interpolate
    character.lastMovementSpeed =
      Math.sqrt(controls.x * controls.x + controls.y * controls.y) *
      MOVE_SPEED
  }
  character.angle = newPos.angle
} else if (character) {
  character.lastMovementSpeed = 0
}

We’ll cover the map height calculations in the next section, but the key function here is getNewPositionAndAngle(). This function takes the player’s input and determines the new position and angle relative to the player’s camera angle:

export function getNewPositionAndAngle(
  controls: Controls,
  pos: Vec3
): { pos: Vec3; angle: number } {
  const dir = getDirectionFromAngle(controls.cameraAngle)
  const result = {
    pos: { ...pos },
    angle: 0,
  }
  result.angle = -(controls.cameraAngle - Math.atan2(-controls.x, controls.y))
  if (controls.x < 0) {
    result.pos.x += dir.z * MOVE_SPEED_PER_FRAME * Math.abs(controls.x)
    result.pos.z -= dir.x * MOVE_SPEED_PER_FRAME * Math.abs(controls.x)
  }
  if (controls.x > 0) {
    result.pos.x -= dir.z * MOVE_SPEED_PER_FRAME * Math.abs(controls.x)
    result.pos.z += dir.x * MOVE_SPEED_PER_FRAME * Math.abs(controls.x)
  }
  if (controls.y < 0) {
    result.pos.x -= dir.x * MOVE_SPEED_PER_FRAME * Math.abs(controls.y)
    result.pos.z -= dir.z * MOVE_SPEED_PER_FRAME * Math.abs(controls.y)
  }
  if (controls.y > 0) {
    result.pos.x += dir.x * MOVE_SPEED_PER_FRAME * Math.abs(controls.y)
    result.pos.z += dir.z * MOVE_SPEED_PER_FRAME * Math.abs(controls.y)
  }
  return result
}

This ensures that the player's controls are always relative to their camera’s direction, achieving the Roblox-style character controller we wanted.

Rendering the Map

What would the world be without a map to explore? In this tech demo, our map is a simple grid of heights that generates blocks for players to explore. Our game map definition is as follows:

// wrapper for the content of each tile - useful
// if we want to add color or other attributes later
export type GameMapElement = number
// the actual game map is an array of tiles. We only
// specify the height if it's non-zero to keep the size down
export type GameMap = GameMapElement[]

Since the map is simply an array of 1x1 blocks, determining the height at any given location is highly efficient:

export function getHeightAt(map: GameMap, x: number, z: number) {
  x = Math.floor(x)
  z = Math.floor(z)
  return map[x + z * GAME_MAP_WIDTH] ?? 0
}

This approach is useful because we’re using a fairly brute-force collision detection on the server. We determine the height at a player’s position by sampling around the player and selecting the maximum height. Each time the player attempts to move, we calculate the height at the new location and compare it to the player’s current height:

  • If it’s lower, the player needs to fall.
  • If it’s the same, there’s no change on the y-axis.
  • If it’s slightly higher, the player should step up onto the block.
  • If it’s significantly higher, the player can’t move to that location.
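The four cases above can be sketched as a single helper. The names, the `MAX_STEP_UP` value, and the falling behavior here are illustrative only — the demo resolves falling elsewhere in its update loop:

```typescript
// assumed threshold, borrowing the demo's constant name
const MAX_STEP_UP = 0.5

type MoveResult = { allowed: boolean; newY: number }

// decide whether a move to a tile of the given height is allowed,
// and what the character's new vertical position would be
function resolveVertical(currentY: number, targetHeight: number): MoveResult {
  const step = targetHeight - currentY
  if (step >= MAX_STEP_UP) {
    // significantly higher: the move is blocked
    return { allowed: false, newY: currentY }
  }
  if (step > 0) {
    // slightly higher: step up onto the block
    return { allowed: true, newY: targetHeight }
  }
  // same height or lower: move allowed; a lower target means the character
  // will fall, which gravity would resolve over subsequent frames
  return { allowed: true, newY: currentY }
}
```

This mirrors the `step < MAX_STEP_UP` check in the update loop: anything at or above the threshold blocks the move, anything below it either steps up or leaves the y-axis to gravity.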

You can see this logic in the update() function, which calls our sampling function findHeightAt, as shown below:

function findHeightAt(game: GameState, x: number, z: number) {
  let maxHeight = 0
  const characterSize = 0.5
  const step = characterSize / 5
  for (
    let xoffset = -characterSize / 2;
    xoffset <= characterSize / 2;
    xoffset += step
  ) {
    for (
      let zoffset = -characterSize / 2;
      zoffset <= characterSize / 2;
      zoffset += step
    ) {
      maxHeight = Math.max(
        maxHeight,
        getHeightAt(game.map, x + xoffset, z + zoffset)
      )
    }
  }

  return maxHeight
}

Finally, we want to visualize the game map in the 3D world for players to explore. Since the game map is part of the game state, it’s also accessible to the client. At the start of the tech demo, we call buildGameMap():

export function buildGameMap(map: GameMap): void {
  // cycle through any blocks that are defined and create
  // a simple box with the wall texture at the right height
  for (let x = 0; x < GAME_MAP_WIDTH; x++) {
    for (let z = 0; z < GAME_MAP_HEIGHT; z++) {
      const height = getHeightAt(map, x, z)
      if (height) {
        const geometry = new BoxGeometry(1, 1, 1)
        const material = new MeshLambertMaterial({ map: wallTexture })
        const cube = new Mesh(geometry, material)
        cube.castShadow = true
        cube.receiveShadow = true
        cube.scale.y = height
        cube.position.x = x + 0.5
        cube.position.y = height / 2
        cube.position.z = z + 0.5
        getScene().add(cube)
      }
    }
  }
}

As you can see, we cycle through the locations with a defined height and create a textured box to represent each height level. This provides players with a bare-bones world to navigate.

With Three.js, building a 3D game is accessible for any web developer. Adding multiplayer with the Rune SDK is also straightforward. Hopefully, the code in this tech demo covers most cases where a game needs a well-rendered 3D world and familiar controls as a foundation for something exciting!

Looking forward to seeing the 3D games you build on Rune!

Want to learn more? Join us on Discord for a chat!

Subscribe to our newsletter for more game dev blog posts
We'll share your email with Substack
Substack's embed form isn't very pretty, so we made our own. But we need to let you know we'll subscribe you on your behalf. Thanks in advance!

· 3 min read
Amani Albrecht

🛠️ App Improvements

  • Implemented design changes to the game details screen, making it look even better and easier to navigate ✨
  • 🔄 Added pull-to-refresh in the choose game screen inside rooms, serving up new recommendations to gamers each time!
  • 🎨🛒 Improved our purchasing UI with better visuals and a smoother flow between avatar options.
  • Updated our "choose game" UI for better alignment when favoriting and easier game selection 🎮👌
  • Upgraded a bunch of navigation pathways and flows throughout the app, making Rune feel more polished 🌟
  • 🔴 Added small and sleek red dots to encourage gamers to customize their avatar and names!
  • Improved the way our app does over-the-air updates so everyone can get the newest designs & material seamlessly 🧵

🪲 Bug Fixes

  • Went on a bug-busting hunt this month! Tracked down and prevented a plethora of bugs in voice chat and rooms 💥🐛
  • Refactored Rune's alert code and inadvertently fixed a few app crashes! Shout out to Denis 🚀
  • Fixed the Rooms tabs by moving them back inside the header, similar to the search layout so the app is a cohesive experience throughout!
  • Updated the game share aspect ratio to square, ensuring it looks better and fits on all screens 🖼️
  • Made sure that all comments from a blocked or reported user immediately hide after refresh 🚫
  • Disabled the unlock room button in matching rooms to keep the gaming experience between you and your new friend!
  • Added ignoring 'rooms ending' events in the room if the call hasn’t started yet, avoiding false error reports 📞
  • Adjusted our emoji picker for perfect visual alignment, eliminating jumping when selecting or deselecting on Android 😊
  • Fixed a few cases where room ends weren't notifying the app properly, busting a few tricky bugs!
  • Resolved an issue where gem totals weren't updating after a name change 💎
  • Fixed an issue with game version selection, making sure that in dev game versions are not available for normal users. Thanks @iamlegend235 in our Discord for reporting it.

💻 SDK Improvements

  • Added an isNewGame flag to stateSync event that sets to true whenever there's a new game session (start of new game, restart) allowing devs to more easily handle game restarts 🕹️
  • Updated our persistence code to bust a bug where players leaving wouldn't trigger the persisted state in the Dev UI!
  • Prevented game errors by disallowing Rune.actions from being called inside update functions or other actions.
  • Refined our logs to now include whether a user is a player, spectator, or unknown for all client-side messages 🔍

· 3 min read
Amani Albrecht

🛠️ App Improvements

  • 🏠 Pushed the new games tab to be the landing home page, boosting your games to the forefront for new gamers.
  • Upgraded the change game UI inside rooms to show curations just like on the home page 🖼️
  • Added the ability for gamers to favorite your games so they can easily keep coming back for more 💗
  • Finally brought the whole app into the beautiful new Rune redesign with updated settings and onboarding pop-ups visuals! 🎨
  • Unveiled mutual friend suggestions—a simple addition that dramatically improved friend request rates 👥✨
  • 🎮 Added all your games directly onto your public dev profile screens, making it easy for everyone to see your catalog of games.
  • Upgraded all list scrolling animations on Android, making the app feel more polished!
  • Allowed reporting other users for toxic comments or other public actions to maintain our positive gaming community 🚫💬
  • 👁️ Slight visual tweaks to the total plays display on your games to clearly showcase their popularity!
  • Added more channels for gamers to share your games directly, like Instagram and Snapchat 📣
  • Upgraded our over-the-air update logic, making it easier than ever for gamers to update the app—sometimes seamlessly without even noticing 🚀
  • Added new logging to track exactly how and where gamers launch your games, enhancing our homepage improvements 📊

🪲 Bug Fixes

  • Resolved a tricky crash on Android 14 during background calls caused by stricter microphone permissions on Android’s end 📱
  • Addressed more Android 14 issues where push notifications weren't opening rooms as intended 🔔
  • 🤝 Fixed a bug where the match feedback screen reappeared if the app was closed and reopened during its display.
  • Improved typing in Rune rooms on Android—no more letter skipping or input delays ✍️
  • To prevent the keyboard from blocking games, it now dismisses automatically whenever any pop-up or modal screen opens ⌨️
  • Fixed a few issues on the new home page, improving its robustness and preventing crashes when changing orientation from landscape to portrait 🔄
  • Updated our chat logic to prevent the app from crashing when no sticker apps are available for sending GIFs!
  • Adjusted our emoji picker for perfect visual alignment, eliminating jumping when selecting or deselecting on Android 😊
  • Made visual fixes to landscape games to accommodate all different screen sizes and phone types 📐
  • Improved the display count on game details—now all your friends who recently played fit neatly in the box 👯
  • 🌍 Caught and fixed some translation errors in all the updated app copy.
  • Fixed the occasional white flashing on the room join & made it less jarring.
  • Improved friend suggestions by fixing an animation glitch and optimizing the mutual-friends query for better performance 👥
  • 🎧 Prevented exceptions that were being thrown when accessing room sound audio files, enhancing stability.

💻 SDK Improvements

  • Introduced the Dev Dashboards 🥳 https://dash.rune.ai/
  • Refactored game error logging in preparation for sharing all this info with you all 🖥️
  • 🌐🛡️ Correctly ignores blob requests if they are to the same domain as currently allowed ones, ensuring streamlined data handling.
  • Bumped rune-eslint to the latest version (2.0.1 to 2.0.2) to fix the "Rune is not defined" issue that @Pixel Pincher was experiencing!
  • Unveiled the latest dev UI with improved visuals and enhancements to prevent flashing!

· 4 min read
Kevin Glass

There are a growing number of great games on Rune being played by millions of players every day. It's time to take a look at one of them - Duck Wars, which was part of the Rune Open Source Grants program. This incredible game was created by the talented developer Ethan.

Play Duck Wars Now

What’s the Game?

A classic implementation of Battleship with a fun twist, using rubber duckies as your targets. It's light-hearted fun but fits the platform perfectly as a game that you can enjoy while still chatting with your friends. The art style and sound effects are perfectly matched to the gameplay.

For those who don't know Battleship: in Duck Wars you place your ducks in the bath in any arrangement you like. Your opponent does the same. You then take turns taking pot shots at each other's ducks. The first one to blast all those pesky ducks wins.

What’s Great About It?

Things that seem to work about Duck Wars on Rune (good tips for other devs):

  • Well known classic game
  • Bright and obvious graphical style
  • New twist and style on the old classic
  • Easy controls for mobile

Developer Interview

Ethan kindly agreed to answer a few questions for us on his experience making games on Rune.

How long have you been building games?

Before developing Duck Wars, I had only built a few very small games in Unity occasionally over the years, so my game development experience is quite limited.

What gave you the idea for the game?

The idea for Duck Wars came from my girlfriend, who really loves ducks! Initially, I wanted to create a 1v1 multiplayer game, and combining that with a duck theme ultimately led to the concept of a duck-themed battleship game.

How long did the game take to build?

It took about two months to complete the game, though I worked on it off and on during that time. Progress was steady but varied depending on my schedule with school.

What was the most fun bit of the game to develop?

The most enjoyable part of the development was definitely implementing multiplayer using the Rune SDK. Having never worked with multiplayer before, it felt almost like magic to see how seamlessly it integrated with React. After laying the groundwork for the game, adding multiplayer functionality and seeing it come to life was incredibly satisfying.

Did you expect the game to be successful?

Honestly, I didn’t expect the game to be particularly successful. It started as a small, fun project I worked on the side without anticipating much. However, being able to see thousands of players engage with and enjoy the game has been a great surprise and incredibly rewarding!

What would you do different next time?

One thing I'd do differently next time is to incorporate regular user testing throughout the development process. Being able to get continuous feedback from players would have helped identify and address many of the issues I faced during development early on, improving both the development experience and the final product.

How did you find Rune to work with?

The experience of working with Rune has been really great! The documentation was clear and easy to follow. Whenever I had questions or encountered issues with the SDK, I could reach out to the developers on Discord and they would promptly address them. The developers also sought feedback on my experience with the SDK and asked for suggestions on how they could improve the development experience, which I really appreciated!

Anything else you'd like to say?

Be sure to check out my other game on Rune, called Tap Party!

What Do the Players Think?

Here are some of the thousands of players' comments on the game:

My favorite game by far!

such a fun game with nice background music! I love this

Best game ever!

It's clear that Duck Wars is well loved! Ethan has built a great game and entertained a huge number of players worldwide!

If you’d like to talk about the game, learn how it was built, or build your own, drop by our Discord.


· 9 min read
Kevin Glass

At Rune, we want you to be able to use game development tools that you love with our platform. With this in mind, we’ve adapted the tutorial game from the popular framework Phaser to be multiplayer on Rune.

Approach

Phaser is wonderfully powerful as a game library, and one of its key concepts is putting everything into the scene graph. This is fantastic for a single-player game, since the physics/collision can happen on the client side where the scene graph lives. However, when you approach multiplayer (with any framework), the game needs to be able to run its physics on both the clients and the validating server. With this in mind, in this tech demo we'll move the physics into the logic of the game and use a separate library to manage it.

Outside of this the Phaser framework can be used as normal.

Client Side

To anyone who's used Phaser before, this will look pretty familiar. For those who haven't, this sets up a Phaser runtime and renderer and loads the assets that will be used to render the game:

export default class TutorialGame extends Phaser.Scene {
  preload() {
    // preload our assets with phaser
    this.load.image("sky", "assets/sky.png")
    this.load.image("ground", "assets/platform.png")
    this.load.image("star", "assets/star.png")
    this.load.image("bomb", "assets/bomb.png")
    this.load.spritesheet("dude", "assets/dude.png", {
      frameWidth: 32,
      frameHeight: 48,
    })
  }
}

const config = {
  type: Phaser.AUTO,
  width: window.innerWidth,
  height: window.innerHeight,
  scene: TutorialGame,
  scale: {
    mode: Phaser.Scale.ScaleModes.FIT,
  },
}

new Phaser.Game(config)

Here's the first difference from a normal Phaser application. Since we're going to be using Phaser for rendering only (the physics happens in the game logic), we're going to add a mapping table that converts physics objects in the logic to client-side scene graph elements:

physicsToPhaser: Record<number, Phaser.GameObjects.Sprite> = {}
lastSentControls: Controls = {
  left: false,
  right: false,
  up: false,
}

You can also see lastSentControls above. Since Phaser provides the input from the player and we need to send that to the logic, we record the controls we sent last time. To avoid wasted network traffic, we make sure we only send the inputs when they change.

Next up we have the Rune integration. We initialize the Rune SDK with a callback function that tells us when the game state changes; in this case, that means our physics objects have been created, updated, or deleted. When we get this notification, we scan through the state and update the Phaser rendering to match. First, we locate each physics body in the Phaser world:

// for all the bodies in the game, make sure the visual representation
// exists and is synchronized with the physics running in the game logic
for (const body of physics.allBodies(game.world)) {
  const rect = body.shapes[0] as physics.Rectangle

  const x = Math.ceil(
    (body.center.x / PHYSICS_WIDTH) * window.innerWidth
  )
  const y = Math.ceil(
    (body.center.y / PHYSICS_HEIGHT) * window.innerHeight
  )
  const width = Math.ceil(
    (rect.width / PHYSICS_WIDTH) * window.innerWidth
  )
  const height = Math.ceil(
    (rect.height / PHYSICS_HEIGHT) * window.innerHeight
  )

  let sprite = this.physicsToPhaser[body.id]

If we don't have a sprite for the body yet, we create the right one based on the type of body we've been given:

// if a sprite isn't already created, create one based on the type
// of body
if (!sprite) {
  if (body.data && body.data.star) {
    const size = Math.ceil(
      (rect.bounds / PHYSICS_WIDTH) * window.innerWidth
    )
    sprite = this.physicsToPhaser[body.id] = this.add
      .sprite(x, y, "star")
      .setDisplaySize(size * 2, size * 2)
  } else if (body.data && body.data.player) {
    // create the player and associated animations
    sprite = this.physicsToPhaser[body.id] = this.add
      .sprite(x, y, "dude")
      .setDisplaySize(width, height)

    this.anims.create({
      key: "left",
      frames: this.anims.generateFrameNumbers("dude", {
        start: 0,
        end: 3,
      }),
      frameRate: 10,
      repeat: -1,
    })

    this.anims.create({
      key: "turn",
      frames: [{ key: "dude", frame: 4 }],
      frameRate: 20,
    })

    this.anims.create({
      key: "right",
      frames: this.anims.generateFrameNumbers("dude", {
        start: 5,
        end: 8,
      }),
      frameRate: 10,
      repeat: -1,
    })
  } else {
    sprite = this.physicsToPhaser[body.id] = this.add
      .sprite(x, y, "ground")
      .setDisplaySize(width, height)
  }
}

Finally, once the sprite definitely exists in the world, we update it to match the body position the logic has given us:

// update the sprite's position and, if it's a player, the animation
sprite.x = x
sprite.y = y
if (body.data?.player) {
  const controls = game.controls[body.data?.playerId ?? ""]
  if (controls) {
    if (controls.left) {
      sprite.anims.play("left", true)
    } else if (controls.right) {
      sprite.anims.play("right", true)
    } else {
      sprite.anims.play("turn", true)
    }
  }
}

The final step is to pass the input from the Phaser side into the logic so we can update the physics model. First we record the input from the on-screen controls, which we can listen to:

const left = document.getElementById("left") as HTMLImageElement
const right = document.getElementById("right") as HTMLImageElement
const jump = document.getElementById("jump") as HTMLImageElement

left.addEventListener("touchstart", () => {
  gameInputs.left = true
})
right.addEventListener("touchstart", () => {
  gameInputs.right = true
})
left.addEventListener("touchend", () => {
  gameInputs.left = false
})
right.addEventListener("touchend", () => {
  gameInputs.right = false
})
jump.addEventListener("touchstart", () => {
  gameInputs.up = true
})
jump.addEventListener("touchend", () => {
  gameInputs.up = false
})

Then in the Phaser update if the inputs have changed, we pass them to our logic through a Rune action:

update() {
  // As with the physics, we don't want the controls to be processed directly
  // in the client code. Instead we want to schedule an action immediately that
  // will update the game logic (and in turn the physics engine) with the new
  // state of the player's controls.
  const stateLeft = gameInputs.left
  const stateRight = gameInputs.right
  const stateUp = gameInputs.up

  if (
    this.lastSentControls.left !== stateLeft ||
    this.lastSentControls.right !== stateRight ||
    this.lastSentControls.up !== stateUp
  ) {
    this.lastSentControls = {
      left: stateLeft,
      right: stateRight,
      up: stateUp,
    }
    Rune.actions.controls(this.lastSentControls)
  }
}

And that's our client done!

Logic Side

On the logic side, we're going to maintain a propel-js physics model that represents our world in the game state. We'll update this each loop, and that state will be passed back to the Phaser client to render.

First, we'll set up some game state containing the physics world and the state of each player's controls: essentially, everything we need to update the world.

export const PHYSICS_WIDTH = 480
export const PHYSICS_HEIGHT = 800

export interface GameState {
  world: physics.World
  controls: Record<PlayerId, Controls>
}

export type Controls = {
  left: boolean
  right: boolean
  up: boolean
}

type GameActions = {
  controls: (controls: Controls) => void
}

declare global {
  const Rune: RuneClient<GameState, GameActions>
}

Next we'll initialize the Rune SDK and configure the world to have our players, platforms and stars:

Rune.initLogic({
  minPlayers: 1,
  maxPlayers: 4,
  setup: (allPlayerIds) => {
    const initialState: GameState = {
      world: physics.createWorld({ x: 0, y: 800 }),
      controls: {},
    }

    // Phaser's setup world, but in propel-js physics
    physics.addBody(
      initialState.world,
      physics.createRectangle(
        initialState.world,
        { x: 0 * PHYSICS_WIDTH, y: 0.2 * PHYSICS_HEIGHT },
        0.5 * PHYSICS_WIDTH,
        0.05 * PHYSICS_HEIGHT,
        0,
        1,
        1
      )
    )
    physics.addBody(
      initialState.world,
      physics.createRectangle(
        initialState.world,
        { x: 0.75 * PHYSICS_WIDTH, y: 0.4 * PHYSICS_HEIGHT },
        0.5 * PHYSICS_WIDTH,
        0.05 * PHYSICS_HEIGHT,
        0,
        1,
        1
      )
    )
    physics.addBody(
      initialState.world,
      physics.createRectangle(
        initialState.world,
        { x: 0.5 * PHYSICS_WIDTH, y: 0.6 * PHYSICS_HEIGHT },
        0.5 * PHYSICS_WIDTH,
        0.05 * PHYSICS_HEIGHT,
        0,
        1,
        1
      )
    )
    physics.addBody(
      initialState.world,
      physics.createRectangle(
        initialState.world,
        { x: 0.5 * PHYSICS_WIDTH, y: 0.9 * PHYSICS_HEIGHT },
        1 * PHYSICS_WIDTH,
        0.3 * PHYSICS_HEIGHT,
        0,
        1,
        1
      )
    )

    // create a player body for each player in the game
    for (const playerId of allPlayerIds) {
      const rect = physics.createRectangleShape(
        initialState.world,
        { x: 0.5 * PHYSICS_WIDTH, y: 0.5 * PHYSICS_HEIGHT },
        0.1 * PHYSICS_WIDTH,
        0.1 * PHYSICS_HEIGHT
      )
      const footSensor = physics.createRectangleShape(
        initialState.world,
        { x: 0.5 * PHYSICS_WIDTH, y: 0.55 * PHYSICS_HEIGHT },
        0.05 * PHYSICS_WIDTH,
        0.005 * PHYSICS_HEIGHT,
        0,
        true
      )
      const player = physics.createRigidBody(
        initialState.world,
        { x: 0.5 * PHYSICS_WIDTH, y: 0.5 * PHYSICS_HEIGHT },
        1,
        0,
        0,
        [rect, footSensor]
      ) as physics.DynamicRigidBody
      player.fixedRotation = true
      player.data = { player: true, playerId }
      physics.addBody(initialState.world, player)

      initialState.controls[playerId] = {
        left: false,
        right: false,
        up: false,
      }
    }

    // create a few stars to play with
    for (let i = 0; i < 5; i++) {
      const rect = physics.createCircleShape(
        initialState.world,
        { x: i * 0.2 * PHYSICS_WIDTH, y: 0.15 * PHYSICS_HEIGHT },
        0.04 * PHYSICS_WIDTH
      )
      const star = physics.createRigidBody(
        initialState.world,
        { x: i * 0.2 * PHYSICS_WIDTH, y: 0.15 * PHYSICS_HEIGHT },
        10,
        1,
        1,
        [rect],
        { star: true }
      ) as physics.DynamicRigidBody
      physics.addBody(initialState.world, star)
    }

    return initialState
  }

As seen above, for each body we set user data indicating the type of body it should be rendered as. This game state will immediately be sent back to our client, which will create sprites in the Phaser scene graph and position them accordingly.

Next, we need to process the input action we provided from the client. This is as simple as updating our game state to record which controls a player is pressing:

actions: {
  controls: (controls, { game, playerId }) => {
    game.controls[playerId] = controls
  },
},

The final step of our update loop is to update the physics model based on the controls provided by the player clients:

update: ({ game, allPlayerIds }) => {
  // each loop, process the player inputs and adjust velocities of bodies accordingly
  for (const playerId of allPlayerIds) {
    const body = game.world.dynamicBodies.find(
      (b) => b.data?.playerId === playerId
    )
    if (body) {
      if (game.controls[playerId].left && !game.controls[playerId].right) {
        body.velocity.x = -100
      } else if (
        game.controls[playerId].right &&
        !game.controls[playerId].left
      ) {
        body.velocity.x = 100
      } else {
        body.velocity.x = 0
      }

      // check if we're on the ground
      if (body.shapes[1].sensorColliding) {
        if (game.controls[playerId].up) {
          body.velocity.y = -600
        }
      }
    } else {
      console.log("Body not found")
    }
  }

  // propel-js likes a 60fps game loop since it keeps the iterations high, so run it
  // twice since the game logic is configured to run at 30fps
  physics.worldStep(60, game.world)
  physics.worldStep(60, game.world)
}

Above, we apply velocities directly to the bodies in propel-js based on the controls the players have provided. We're also using a foot sensor to determine whether the player is on the ground and hence whether they can jump. One other note: a nuance of propel-js is that our game logic runs at 30fps but the physics model works best at 60fps, so we simply run two updates per tick.
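That sub-stepping pattern is worth calling out on its own: run an N-Hz simulation inside a lower-rate logic loop by stepping it multiple times per tick. This is a generic sketch, not propel-js code; `step` stands in for `physics.worldStep`, and the rates mirror the demo's configuration:

```typescript
// rates matching the demo: 30fps game logic driving 60fps physics
const LOGIC_FPS = 30
const PHYSICS_FPS = 60

// run one logic tick's worth of physics by stepping the simulation
// PHYSICS_FPS / LOGIC_FPS times; returns the number of sub-steps taken
function runPhysicsTick(step: (fps: number) => void): number {
  const substeps = PHYSICS_FPS / LOGIC_FPS // = 2 sub-steps per logic tick
  for (let i = 0; i < substeps; i++) {
    step(PHYSICS_FPS)
  }
  return substeps
}
```

Keeping the ratio as a computed value rather than a hard-coded "call it twice" makes it easy to retune either rate later.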

There you have it: a multiplayer version of the Phaser sample with the Rune SDK. It takes a little rethinking of the model, but we can still make use of much of Phaser's power!

Want to know more? Why not drop by the Discord and have a chat?


· 2 min read
Amani Albrecht

🛠️ App Improvements

  • Unveiled the new home (games 🎮) tab, now featuring game carousels with expandable curation lists like "Puzzle" or "Action" games!
  • Revamped our game details page with updated visuals, new play and matching buttons, social proof of how many people love your game, and many more improvements! 🤝
  • Added in fun room sounds that pop 💥 so you can easily know when someone leaves, enters, or is reconnecting even without looking at your phone!
  • Shuffled avatar features around in the editor to look better and make more sense 🎨
  • 🖼️ Now clicking an avatar anywhere in the app shows the user's profile — quick and easy!
  • Upgraded our main page logic: now it auto-scrolls back to the top when you re-tap the tab’s button on the loaded screen 👆

🪲 Bug Fixes

  • 👥 Updated the friend suggestion logic to recognize when you've added someone as a friend elsewhere in the app.
  • Fixed a crash that was happening when auto scrolling to the top on the friends tab if you're already there and have no friends.
  • Busted a small bug where the "friends" label occasionally wasn't showing on friend profiles 🐛
  • 🎤Fixed some voice chat start errors by fixing the feedback screen logic!
  • Updated our TikTok social link in the app so everyone can stay up to date on all things Rune 🔗

💻 SDK Improvements

  • Built out our behind-the-scenes game tracking to prep for some exciting new developer features coming soon! 🌟
  • Fixed the dev UI mobile layout to account for bottom toolbars on some devices & adjusted the minimum dimensions on landscape 📱
  • Added Vue game template built by oats 🚀
  • Updated example user avatars to ensure you’re inspired by the newest content!
  • Vite plugin now includes logic that allows importing not only an external package but also files from within an external package (somePackage vs somePackage/innerFile). This was done by Pixel Pincher! 🥳

· 3 min read
Kevin Glass

When you’re building any client software, it’s useful to know what types of hardware your end users have – doubly so with experience-centric software like games. For the types of games I like to build, even the lowest-end desktop hardware has been more than enough for many years. However, for mobile game platforms like Rune, the types and power of devices that end users have can dramatically affect the player experience.

In this article, we'll look at the types of devices and their capabilities found in the Rune user base. We've taken the top 30 most popular mobile devices on Rune (accounting for about 2 million users) and broken the data down by processor, GPU, screen size, memory, and release date. As you can see below, in mobile game development there's still a huge range of capabilities to account for.

Screen Size

The graph above shows the screen resolutions in device-independent pixels. There’s a huge variety of screen sizes in use, going all the way down to significant numbers (40k) of users with screens as low as 375x667. Likewise, the top end has over 50k users with 2399x4973 screens. Responsive design is key.
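Since the chart is in device-independent pixels (DIP), it can help to see how those numbers relate to physical resolutions. A minimal sketch: in a browser you'd read `window.screen` and `window.devicePixelRatio`, but here they're passed in so the helper is self-contained. The 750x1334@2x example is an iPhone SE-class screen, matching the 375x667 low end mentioned above:

```typescript
// convert a physical pixel resolution to device-independent pixels
function toDeviceIndependent(
  physicalWidth: number,
  physicalHeight: number,
  devicePixelRatio: number
): { width: number; height: number } {
  return {
    width: Math.round(physicalWidth / devicePixelRatio),
    height: Math.round(physicalHeight / devicePixelRatio),
  }
}

// e.g. a 750x1334 physical screen at 2x density
console.log(toDeviceIndependent(750, 1334, 2)) // { width: 375, height: 667 }
```

Laying out your game in DIP rather than physical pixels is what keeps UI elements a consistent physical size across this whole range of devices.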

Memory

The spread of onboard memory is also wide, going as low as a single GB. The top end is quite low compared to very modern devices, maxing out at 8 GB. This, of course, is only in the top 30 devices in a much bigger user base, but it gives you an idea of what the games need to run on.

Processor and Graphics

The following charts show the spread of CPUs and GPUs on the devices playing your games today.

Processor

Graphics

Analyzing the graphs above, we can see there are essentially two types of devices being used:

  • Octa-core CPU, Mali/PowerVR GPU class devices. These are reasonably powerful and will cope with most Rune games very well.
  • Quad-core CPU, Adreno GPU class devices. These are the budget devices that so many Gen Z players own because of their lower cost. These are the ones you need to target to get maximum playtime for your games.

Release Year

One final piece of information: the release year of the devices in the top 30.

This explains the other data: the majority of devices in the top 30 are 4–5 years old.

Hopefully, the data here will help focus and tailor your game development efforts to get the maximum playtime from our player base.

Does this align with what you’ve seen? Want to know more? Why not drop by the Discord and have a chat?


· 4 min read
Kevin Glass

At Rune, the majority of the games on the platform are multiplayer. This is largely because we provide an SDK that enables JavaScript developers to build multiplayer experiences very easily, and our player base has come to expect it. Of course, as mentioned in Modern Game Networking Models, this means we focus on making the backend networking something special.

There are a lot of ways of making games multiplayer, from hot seat to shared screen and, of course, networking itself. Even within networking, there are multiple models to choose from, each suited to a different type of game and level of programming complexity.

If you’re building a network layer for a single game, or a bunch of very similar games, then choosing the easiest network model that satisfies those games’ constraints is the best move.

However, at Rune, we’re pretty opinionated about a single model that works for all cases: predict-rollback. We need to provide a single common framework for all the games on Rune, and so we focus on one networking model that supports the massive variety of games on the platform.

Predict-Rollback

In Modern Game Networking Models we talked in some detail about how predict-rollback works. In summary, all clients keep moving forward, predicting the current game state based on the inputs they know about. If another client provides a new input (via the authoritative server) that occurs before the game time the current client has reached, we roll back the game state, apply the input, and then re-predict the current state.
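As a concrete sketch of that summary (illustrative only, not the Rune SDK’s actual implementation), the core of the model is a deterministic step function plus re-simulation from the last confirmed state whenever an input arrives for a time we have already simulated past:

```javascript
// Minimal predict-rollback sketch (illustrative, not the Rune SDK API).
// The game is a deterministic step function: nextState = step(state, inputs).
function step(state, inputs) {
  const next = { ...state, players: { ...state.players }, time: state.time + 1 };
  for (const { playerId, dx } of inputs) {
    // example deterministic game logic: each input moves a player
    next.players[playerId] = (next.players[playerId] ?? 0) + dx;
  }
  return next;
}

class PredictRollback {
  constructor(initialState) {
    this.confirmed = initialState; // last state all inputs are known for
    this.inputs = [];              // inputs at times >= confirmed.time
    this.predicted = initialState; // locally predicted "current" state
  }

  // Advance prediction to the given game time, replaying known inputs.
  predictTo(time) {
    let state = this.confirmed;
    for (let t = state.time; t < time; t++) {
      state = step(state, this.inputs.filter((i) => i.time === t));
    }
    this.predicted = state;
    return state;
  }

  // A new input arrives, possibly for a time we've already simulated past.
  // Rollback is implicit: we always re-simulate forward from `confirmed`.
  addInput(input) {
    this.inputs.push(input);
    return this.predictTo(this.predicted.time);
  }
}
```

In a real implementation you would also advance `confirmed` as the server acknowledges inputs, so each rollback only re-simulates a short window rather than the whole game.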

So why do we think predict-rollback is the future of networking games and the best fit for a generic networking framework?

  • Some great games have used it to provide excellent multiplayer experiences, like Rocket League and Street Fighter. They also do an amazing job of hiding the rollback/changes when they occur.
  • It works for all cases; whether it's turn-based, RTS, or faster-paced twitch games, predict-rollback provides a stable, consistent approach. Even in turn-based games, where there should be no rollbacks, the simple model of a simulation advanced by inputs still fits the bill.
  • There’s growing library and platform support. Unity, Godot, and even Valve’s Source engine all have plugins that support this model.

What’s so great about the model then?

  • Low bandwidth—you only need to send the initial state and the inputs that change it. That’s pretty powerful right there. The variance in networks, especially with emerging markets becoming huge consumers of games, makes this super important.
  • Best player experience—in many cases, it means that clients can run forward without latency between player input and response. Of course, you need to deal with conflicts when they occur, but this seems to be much easier than the alternatives.
  • Most consistent implementation—once you’ve got determinism handled, it’s the most consistent approach across platforms and devices. Every device acts the same and gets the same results.
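To make the bandwidth point concrete, here’s a rough back-of-the-envelope comparison (with a made-up state shape) between shipping a full state snapshot every tick and shipping only a player’s input:

```javascript
// A made-up game state for six players vs. a single input message.
const fullState = {
  time: 1042,
  players: Array.from({ length: 6 }, (_, i) => ({
    id: i, x: 120.5 + i, y: 88.25 - i, vx: 0.3, vy: -1.2, hp: 100, score: 0,
  })),
};
const input = { time: 1042, playerId: 3, action: "jump" };

// With state sync you'd ship something like fullState every tick;
// with predict-rollback you ship only the inputs.
const stateBytes = JSON.stringify(fullState).length;
const inputBytes = JSON.stringify(input).length;
```

Even for this toy state, the input message is roughly a tenth of the size of the snapshot, and the gap only grows as real game state gets larger.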

What are the downsides? Rolling back and recalculating the game state can be CPU heavy: depending on your approach, you may have to recompute many frames of change quickly when a new input arrives. However, this is also why it’s now the right choice. Devices have reached a point where CPUs are vastly overpowered for what games ask of them, so there’s headroom for a smarter, more general network model.

Of course, if you’re building a network model for a specific game, there are many tricks and game-specific approaches you could take.

If, however, you're building a library/framework that supports many types of games in many different environments and on different devices, predict-rollback is the right choice for now and the future.

Want to learn more about our approach or simply want to discuss the content of this article? Stop by our Discord and let’s chat!
