
How to start working with Structure Sensor. FieldXR project part 1

Published by jetbi
17 March 2021

There are lots of Field Service Management solutions nowadays. Field technicians need to collect and analyze information from the field and solve problems in the shortest time possible. The FieldXR project is a brand-new FSL add-on that surpasses your expectations, handling the tasks above and providing broad visualization features.

The FieldXR project is based on automating the acquisition of 2D and 3D data, sending the received data to the server, getting extended information about the object, and displaying the processed information on the device's screen with the ability to engage with it interactively. With the help of FieldXR, technicians can find the object's description, model, instructions, year of issue, etc.

What do field workers need for that? That's simple. Just a device with FieldXR installed and Internet access.

The main objectives:

- gathering 3D information about the scanned physical object;

- recognizing the scanned object among the objects stored in the database;

- displaying the identified object in the coordinates of the scanned fragment on the device screen using AR technology.


Pic. 1. Preview of the FieldXR app


Technical requirements for the project:

- an iOS app (iPad);

- gathering 3D object data in real time;

- identification of the collected 3D information about the object;

- high performance.

You need a device that allows you to collect 3D data about objects in real time, is compatible with the iPad, and includes an SDK.

In our case, the best-suited device turned out to be the Structure Sensor by Occipital, Inc. The company ships the device with a stable, easy-to-use platform for creating applications with the Structure SDK.


Pic. 2. Structure Sensor (model ST01) connected to the iPad


Challenges and solution


  1. Structure Sensor presentation;
  2. Preparation of the necessary equipment and apps: the Structure SDK, Scanner.xcodeproj;
  3. Configuration of the Xcode IDE (the integrated development environment for macOS and iOS) and building the Scanner.xcodeproj example;
  4. Learning the Structure SDK based on Scanner.xcodeproj.


1. Structure Sensor presentation

Connect the Structure Sensor to your iPad and launch the Scanner app. If you run into difficulties, use the following instructions: How to connect with Structure Sensor.


Pic. 3. Displaying the data from the iPad camera and the Structure Sensor



Pic. 4. Receiving the 3D mesh data



Pic. 5. View of the collected 3D mesh data


The Scanner app allows you to record the scanned object's 3D data and send it as an email attachment. You can also use the application to build a database of 3D objects.


2. Source data: the Structure SDK and Scanner.xcodeproj

Download the Structure SDK from the Structure Sensor site:


Pic. 6. Downloading the Structure SDK


The next image shows the contents of the Structure SDK package.


Pic. 7. Structure SDK


We are interested in the Scanner.xcodeproj project. It compiles and runs on an iPad with a connected Structure Sensor. As the next step, you need to set up the IDE.


3. Configuring the Xcode IDE and building Scanner.xcodeproj

It is difficult to imagine developing for iOS without a Mac. In the next steps, we will use a Mac Mini (macOS Catalina 10.15.6).


Pic. 8. Mac Mini (macOS Catalina 10.15.6)


Install the Xcode IDE from the App Store. I installed the latest version, Xcode 12.2.

After setting up Xcode, open the Scanner.xcodeproj.


Pic. 9. Path to Scanner.xcodeproj



Pic. 10. Opened Scanner.xcodeproj


Let's prepare the project for building and running on the device.

In the project settings, specify the iOS version that your iPad runs:


Pic. 11. Settings of Scanner.xcodeproj


If the iPad is connected to the Mac through a Lightning cable, Xcode will detect the iPad automatically. However, since the iPad's port is already occupied by the Structure Sensor, we will use the Xcode settings to connect to the Mac over the network.

In Xcode, open Devices and Simulators (Cmd+Shift+2):


Pic. 12. How to open Devices and Simulators


In Devices, select the iPad and check the box “Connect via network”:


Pic. 13. Check “Connect via network”


Now, our project is ready to build and run.

When connecting to Xcode for the first time, the iPad will ask whether it should trust the new device. Tap Trust.

On the iPad, go to Settings -> General -> Device Management and follow the instructions to trust your developer account.

After a successful build, a window will appear in Xcode asking you to enter your password.


Pic. 14. Enter the password


If authentication is successful, the debug version of Scanner.xcodeproj will be uploaded to the iPad:


Pic. 15. Launch screen of Scanner.xcodeproj on the iPad


Now you can test the project: create an object’s 3D scan, save it, and so on. 


4. Learning the Structure SDK based on the Scanner.xcodeproj code

The Structure SDK for iOS is written in Objective-C; therefore, developers may face some difficulties while setting up the application and throughout the whole workflow.

Information to be learned:

  1. Syntax and structure of the Xcode project in Objective-C;
  2. The structure of the Structure SDK framework;
  3. The Scanner.xcodeproj code (the structure, the call stack).


Objective-C syntax. The Xcode project's structure

At first acquaintance, the syntax of Objective-C doesn't seem intuitive or friendly. It takes time to get used to the format of declaring classes and methods and sending messages.

In Matt Neuburg's book Programming iOS 7 (Chapter 6), you can learn more about the "anatomy of an Xcode project".
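As a quick orientation, here is a minimal sketch of the three constructs mentioned above: a class declaration (in a .h header), its implementation, and a message send. The Scanner3D class and its method are invented purely for illustration; they are not part of the SDK.

```objectivec
// Scanner3D.h — the interface (header file)
#import <Foundation/Foundation.h>

@interface Scanner3D : NSObject
- (NSString *)statusForSensorNamed:(NSString *)name;
@end

// Scanner3D.mm — the implementation (source file)
@implementation Scanner3D
- (NSString *)statusForSensorNamed:(NSString *)name
{
    // A message send uses the [receiver selector:argument] syntax
    return [NSString stringWithFormat:@"%@: ready", name];
}
@end

// Usage (inside some method):
// Scanner3D *scanner = [[Scanner3D alloc] init];
// NSString *status = [scanner statusForSensorNamed:@"Structure Sensor"];
```

Once this declaration/implementation/message pattern clicks, reading the Scanner sample code becomes much easier.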

With the frameworks included, the project structure looks like this:


Pic. 16. Structure of the Xcode project


For convenience, the project is divided into groups (Supporting Files, Frameworks, FieldXR, Products). Files with the .h extension are header files; .mm files are source files (which can mix Objective-C and C++). The call sequence is as follows:


Pic. 17. App execution

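The entry point behind that call sequence is the standard iOS one: main() hands control to UIApplicationMain, which creates the application object and its delegate and starts the run loop. This is the stock main.m such projects ship with (AppDelegate is the conventional delegate class name):

```objectivec
// main.m — the standard iOS entry point
#import <UIKit/UIKit.h>
#import "AppDelegate.h"

int main(int argc, char *argv[])
{
    @autoreleasepool {
        // UIApplicationMain creates the UIApplication object, assigns
        // AppDelegate as its delegate, and starts the main run loop;
        // the delegate then sets up the root view controller.
        return UIApplicationMain(argc, argv, nil,
                                 NSStringFromClass([AppDelegate class]));
    }
}
```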

The structure of the Structure SDK framework

According to the Structure SDK documentation, the structure looks like this (the parts highlighted in color are worth paying attention to):


Pic. 18. Framework structure


Capture Session:

  • STCaptureSession  <class>  - creates a connection between an application-specific delegate class and the Structure Sensor / Apple LiDAR / iOS color camera;
  • STCaptureSessionDelegate  <protocol>  - the interface that an application-specific class must implement in order to receive capture session callbacks;
  • STOccFileWriter  <class>  - manages writing incoming sensor data from a capture session to an OCC file;
  • STDepthFrame  <class>  - depth data;
  • STColorFrame  <class>  - a color frame (only these resolutions are used: 640x480, 2048x1536, 2592x1936, 3264x2448 and 4032x3024).
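The Scanner sample wires these classes together in its -setupCaptureSession method. A hedged sketch of what that setup looks like is below; the factory method and monitoring call names (newCaptureSession, startMonitoringWithOptions:) are assumptions based on the SDK version we used, so verify them against the Structure.framework headers.

```objectivec
// Sketch of capture session setup — method names marked below are
// assumptions; check the Structure.framework headers for your SDK version.
#import <Structure/Structure.h>

- (void)setupCaptureSession
{
    // Create the capture session and make this view controller its
    // delegate, so it receives captureSession:didOutputSample:type:
    // callbacks (assumption: newCaptureSession is the factory method).
    _captureSession = [STCaptureSession newCaptureSession];
    _captureSession.delegate = self;

    // Start monitoring: the session begins reporting sensor connection
    // events and, once streaming starts, depth and color frames
    // (assumption: startMonitoringWithOptions: is the entry point).
    [_captureSession startMonitoringWithOptions:@{}];
}
```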


SLAM Engine:

  • STMapper  <class>  - recreates a 3D model of the scene;
  • STTracker  <class>  - tracks the 3D position of the Structure Sensor (how the camera moves and rotates);
  • STScene  <class>  - contains information about the sensor and the reconstructed mesh;
  • STMesh  <class>  - data about mesh vertices and faces;
  • STColorizer  <class> - colors the mesh.
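Once the SLAM engine has produced an STMesh, it can be persisted with the writeToFile:options:error: method covered later in this post. A minimal sketch, assuming an STMesh instance is already in hand (the accessor that yields it from STScene varies by SDK version, and the empty options dictionary is an assumption — real code passes format keys defined in the SDK headers):

```objectivec
// Export the reconstructed mesh produced by the SLAM engine.
// `mesh` is an STMesh obtained from the STScene (accessor name depends
// on the SDK version — see the Structure.framework headers). The empty
// options dictionary is an assumption; real code passes SDK-defined
// format options (e.g. the output file format).
NSString *path = [NSTemporaryDirectory()
                  stringByAppendingPathComponent:@"scan.obj"];
NSError *error = nil;
BOOL ok = [mesh writeToFile:path options:@{} error:&error];
if (!ok) {
    NSLog(@"Mesh export failed: %@", error.localizedDescription);
}
```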


5. Structure and call stack of Scanner.xcodeproj


Pic. 19. Structure of Scanner.xcodeproj


To parse the code of the Scanner.xcodeproj project, I used:

  • debug mode with breakpoints and NSLog output enabled;
  • the developers' comments in the project code;
  • analysis of the application with parts of the code disabled (commented out).

Below are the results of the code analysis.


All the logic of interaction between the App and the Structure Sensor is contained in:

  • ViewController  <class> - configures the user interface, updates information in the App window, processes user actions (buttons, gestures), interacts with other controllers (MeshViewController, ViewpointController), and displays notifications about the sensor status;
  • ViewController+CaptureSession  <category> - checks the connection to the sensor, sets the resolution for the camera's color frames, sets the sensor and camera lens configurations, starts monitoring, and receives capture session callbacks;
  • ViewController+SLAM  <category> - contains information about the sensor and the reconstructed mesh, initializes the scene, tracks the 3D position of the sensor, initializes the cube plane, selects keyframes, and converts depth data to RGB values;
  • ViewController+OpenGL  <category> - responsible for rendering the images delivered by the color camera in the App window using OpenGL.


Auxiliary classes:

  • CustomUIKitStyles  <class> - customizes the appearance of user interface elements;
  • MeshViewController  <class> - responsible for configuring user gestures (when interacting with an instance of the App window), for displaying the scanned object;
  • MeshRenderer  <class> - renders the collected 3D data;
  • CustomShaders  <class> - configures custom shaders;
  • ViewpointController  <class> - sets the camera and projection matrix depending on the user's gestures for displaying the mesh (see Pic. 5, the view with the collected 3D mesh data);
  • EAGLView  <class> - creates the surface on which the OpenGL scene is rendered;
  • ViewUtilities  <class> - generates an RGBA color image;
  • CalibrationOverlay  <class> - responsible for checking and, if necessary, calibrating the sensor;
  • SettingsPopupView  <class> - responsible for pre-setting capture for configurable sensor parameters. 

Here you can find in-depth information about the collection of 3D data by the sensor.


The full picture of interaction with the sensor looks like this:

  • Initialization & sensor start streaming

Method of the ViewController class: - (void)setupCaptureSession;

Creates a capture session, sets the session config, and starts monitoring.

  • Application received streaming output 

Method of the ViewController class:

- (void)captureSession:(STCaptureSession*)captureSession
didOutputSample:(NSDictionary*)sample
type:(STCaptureSessionSampleType)type

Notifies the delegate of data output from the capture session (a depth frame, an infrared frame, a visible frame, a set of synchronized depth, color, or infrared frames, and so on). Next, the depth pixel image is converted to an RGB image (colorFrame + depthFrame).

  • Processing for your specific usage

Interaction with the App UI.

- (void)adjustVolumeSize:(GLKVector3)volumeSize - adjusts the size of the scan-area cube

- (void)enterScanningState - starts scanning

- (void)enterViewingState - gets the 3D data of the scanned object

Methods of the MeshViewController class allow you to view the object in a new App window (3 viewing modes), as well as:

- (void)emailMesh - sends the file by email, having previously written the mesh to a file using the STMesh class method

- (BOOL)writeToFile:(NSString *)filePath options:(NSDictionary *)options error:(NSError* __autoreleasing *)error

  • Stop streaming

Method of the ViewController class:

- (void)captureSession:(STCaptureSession*)captureSession didStopAVCaptureSession:(AVCaptureSession*)avCaptureSession
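Putting the lifecycle together, the delegate side of the flow above can be sketched as a single class. Only the signatures quoted in this section come from the SDK; the method bodies here are placeholder comments, not the actual Scanner sample code:

```objectivec
// ViewController conforming to STCaptureSessionDelegate.
// Signatures match those quoted above; bodies are placeholders.
@interface ViewController : UIViewController <STCaptureSessionDelegate>
@end

@implementation ViewController

// 1. Initialization & sensor start streaming
- (void)setupCaptureSession
{
    // create the session, set the session config, start monitoring
}

// 2. Application receives streaming output
- (void)captureSession:(STCaptureSession *)captureSession
       didOutputSample:(NSDictionary *)sample
                  type:(STCaptureSessionSampleType)type
{
    // dispatch on `type` (depth frame, color frame, synchronized set, ...),
    // then feed the SLAM pipeline and update the UI
    // (step 3, "Processing for your specific usage", lives in
    //  app-specific methods such as enterScanningState)
}

// 4. Stop streaming
- (void)captureSession:(STCaptureSession *)captureSession
didStopAVCaptureSession:(AVCaptureSession *)avCaptureSession
{
    // tear down any state tied to the live stream
}

@end
```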



The collected information about how the Structure Sensor and its framework work allows us to start developing the JET BI project:

  1. Create a project in Xcode (in Obj-C)
  2. Connect the framework (connection instructions)
  3. Implement the sequence "Initialization -> Sensor start streaming -> Application receives streaming output -> Processing for your specific usage -> Stop streaming", following the Scanner.xcodeproj code, as the project skeleton
  4. Add new functionality to the project using the gained knowledge, the documentation, and the developer forum


Conclusions and future work

When I first got my hands on an iPad with a connected Structure Sensor, I could hardly resist scanning every object in the room and, of course, creating a 3D version of myself. :)

The process of interacting with the physical device (the data connection between the sensor and the iPad; charging the sensor battery) and with the Scanner app (scanning an object, warning texts, viewing a 3D model, and saving it) is simple and intuitive. It is also easy to download and build the example project with the framework already connected.

The most time-consuming process was studying the project code and establishing relationships between classes and their features.

Soon, the gained knowledge will form the basis of the JET BI project.



Useful links:

- the Structure SDK (documentation, developer forum);

- Xcode;

- an additional project for working with the sensor, instructions for creating it, and its code (Swift);

- where to find information on Objective-C: Xcode Quick Help, literature, developer sites, video tutorials.


About author

Volha Dzeranchuk - C++ developer. Education: Bachelor's degree (graduated in 2012) from BSU, Faculty of Mechanics and Mathematics. More than 8 years of experience in IT: development of GUI applications for Windows, creation of DLLs, project support (C++, UE4, SQL/QT, Visual Studio/VR). More than 3 years of experience teaching IT disciplines.

