
Augmented Reality Interior Design Application:

1. Abstract:
In today's world, technology is developing rapidly, and so is people's urge to surround themselves with the latest gadgets. Augmented Reality (AR), a blend of the real and the virtual, meets part of this demand and is known for its effectiveness in interior design and in making shopping easier. It creates an environment in which people can visualize interior design models virtually in their own physical spaces, from the comfort of their homes, by rotating, dragging, and zooming the models in and out. This can be achieved with marker-less Augmented Reality and Google ARCore, a developer platform for designing and deploying AR experiences through which applications can sense the real environment. ARCore provides three significant capabilities that help merge the real world with the virtual: motion tracking, light estimation, and environmental understanding. However, not all applications serve users equally well; several AR-enabled applications, such as SayDuck, fall short of what users expect. With the phenomenal growth of this technology, a considerable gap was noticed: people lack a fast virtual experience for viewing 3D models through the camera from their homes, and they struggle to judge how an interior design model will look in their room. The ARCore framework addresses this gap in interior design applications, letting customers view desired items virtually in their real environment before purchasing them.

2. Introduction:
Arranging interior items in a physical space is tedious work when there are many items to place. People can sketch their room layout on paper, use computer applications that assist with design, or simply arrange the room directly to see how everything fits. Many people are busy with work, which limits the time they can spend traveling to different stores to shop for everyday interior items. It is difficult to satisfy customers who want to decorate a room without giving them some view of how the finished space would look. Smartphones have become a vital tool in everyday life, and numerous mobile applications have been developed to make people's lives easier. In shopping, for example, choosing the right piece of interior design that fits a room, or matches the existing furnishings, can be a tedious task. There is therefore a need for an application that helps users make the correct decision before purchasing items, especially when some stores have no refund, return, or exchange policy.

Customers find such problems hard to bear, so as technology advances, people demand better applications. Using Augmented Reality in an interior design application helps customers purchase items from the comfort of their homes. The concept is to let people view products in their room without physically placing them there, and to design their interior with simple actions such as rotating, dragging, and zooming in and out. This greatly reduces the user's time and effort. Augmented reality lets the user try out virtual interior designs within the real structure of their home before buying, which makes choosing interior design objects much easier. It is no longer necessary to make long shopping trips searching for the right item, or to use a measuring tape to find out whether a piece of furniture would fit in the room.

To make all this happen, the ARCore framework is used, and many developers have adopted it in their applications to give customers this convenience. Marker-less AR is used for the interior design application because marker-less AR applications can recognize objects that were not predefined. The application recognizes features, patterns, colors, and so on, since there are no predefined image targets in the app. At runtime, the app analyzes the variables in the camera frame to trigger AR actions, including motion tracking, light estimation, and environmental understanding. The environment plays a major role in the system architecture: the accurate functioning of the camera, the sensors, and the software APIs working together makes the system capable of augmenting virtual 3D objects on the display screen. ARCore is Google's platform for building augmented reality experiences. Using different APIs, ARCore enables a phone to sense its environment, understand the world, and interact with information.

Developers of AR interior design applications apply these techniques and algorithms, but a considerable drawback remains: many applications do not satisfy users' demands. Many AR-enabled applications suffer from slow processing, slow 3D model rendering at runtime, and a limited product inventory. These issues weaken customer trust and push customers toward noticeably better alternatives.

To solve this issue, the ARCore framework is used in the application to fix slow processing and slow 3D model viewing, since ARCore provides accurate placement of 3D models in the real environment.

3. Related Work:
Creating practical, interactive AR applications is a complicated task because of the structural complexity of AR content models and the diversity of aspects that must be taken into account during content creation. To build an easy-to-use, configurable AR system, it is important to choose appropriate methods for user interaction with the real and virtual environments; the interaction methods must be intuitive and user-friendly.

Traditionally, interior designers have relied on verbal descriptions, 3D renderings, or in-person tours of physical showrooms to communicate their vision to clients. These representations often lack critical details and are costly and slow to iterate on. AR interior design tools create an opportunity to build higher-fidelity representations that give customers crucial spatial insight, such as depth perception and varying viewing perspectives.

IKEA launched an AR app in 2017, allowing users to position furniture from the IKEA
catalog into their rooms using an iPhone camera. Several e-commerce retailers, including
Wayfair and Chairish, have released similar products. Some professional architects have
experimented with using AR in their design process.

The AR feature of Houzz's application closely resembles Amazon AR View and IKEA Place; the working idea is similar. The user can find furniture and home accessories in the application's catalogue and, in selected cases, view them in their room. The feature uses the smart device's camera to place the products into the camera view, much like the other applications.

Amazon AR View uses the smart device's camera to render objects in the camera view in the same way the IKEA Place application does. While browsing the application's catalogue, some furniture and other items offer a feature called View in my room. The user can select this feature and point the device camera at the room to place the product in the desired spot and see how it would look and fit. (Amazon 2019.)

The use of Augmented Reality features in applications aids users. Customers who can try products before buying them are less likely to return items after purchase, because they are sure of their choice in the first place. That means fewer returns, less money lost, and happier customers leaving better reviews on the online store.

4. Methodology:
ARCore does two things: it tracks the position of the mobile device as the device moves, and it builds its own understanding of the real world. ARCore's motion tracking technology uses the phone's camera to identify interesting points, called features, and tracks how those points move over time. Combining the movement of these points with readings from the phone's inertial sensors, ARCore determines both the position and the orientation of the phone as it moves through space. In addition to identifying key points, marker-less AR can detect flat surfaces, such as a table or the floor, and can estimate the average lighting in the area around them. These capabilities combine to give ARCore its understanding of the world around it, which lets you place objects, annotations, or other information in a way that integrates seamlessly with the real world. ARCore thus supports this work through its extensive capabilities. A simple implementation workflow is followed: an AR-enabled smartphone camera senses the environment, processes the captured area, and anchors the free space so that objects can be placed in it. Selecting objects from an interactive menu is the next step, after which the object can be scaled to the user's convenience and rotated about the x, y, and z axes.

The ARCore package com.google.ar.core.* provides the classes used to place objects on the device screen.
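The snippet below is a minimal Java sketch of how this placement flow typically looks with the com.google.ar.core API: a screen tap is hit-tested against the detected planes and, on success, converted into an anchor. The class and method names (ModelPlacer, onTap) are illustrative, and the Android session setup and rendering code are omitted.

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;
import com.google.ar.core.Trackable;

// Illustrative sketch only: assumes an ARCore Session is already configured
// and that 'frame' is the latest Frame obtained from session.update().
public class ModelPlacer {

    // Called when the user taps the screen at (x, y) in view coordinates.
    public Anchor onTap(Frame frame, float x, float y) {
        for (HitResult hit : frame.hitTest(x, y)) {
            Trackable trackable = hit.getTrackable();
            // Only place models on detected planes, and only where the
            // hit pose actually lies inside the plane's polygon.
            if (trackable instanceof Plane
                    && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
                // The anchor keeps the model fixed in the real world as
                // ARCore refines its understanding of the environment.
                return hit.createAnchor();
            }
        }
        return null; // no suitable surface under the tap
    }
}
```

The returned Anchor can then be handed to the rendering layer, so the selected 3D model stays fixed to that real-world spot as the device moves.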

Figure: 3D model creation workflow. Mesh Creation (designing the model) → Mark Seams → Un-wrap Mesh (unwrapping the model for texturing) → Texture Creation (applying the desired color/texture) → Placing the models → Final 3D model.
4.1. Mesh Creation:

The desired interior design model is created by connecting nodes with one another to form a mesh. A mesh consists of triangles arranged in 3D space to create the impression of a solid
object. A triangle is defined by its three corner points or vertices. In the Mesh class, the vertices
are all stored in a single array and each triangle is specified using three integers that correspond
to indices of the vertex array. The triangles are also collected together into a single array of
integers; the integers are taken in groups of three from the start of this array, so elements 0, 1 and
2 define the first triangle, 3, 4 and 5 define the second, and so on.
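As a concrete illustration of this layout, here is a small engine-agnostic Java sketch; SimpleMesh and its members are illustrative names rather than part of any particular API.

```java
// Minimal sketch of the mesh layout described above: a flat vertex array
// and an index array read in groups of three.
public class SimpleMesh {
    // Each vertex is an (x, y, z) triple, so vertex i occupies
    // elements 3*i .. 3*i+2 of this array.
    float[] vertices;
    // Indices into the vertex array; elements 0, 1, 2 form the first
    // triangle, 3, 4, 5 the second, and so on.
    int[] triangles;

    /** Returns the three corner positions of triangle t as a 3x3 array. */
    float[][] triangleCorners(int t) {
        float[][] corners = new float[3][3];
        for (int c = 0; c < 3; c++) {
            int v = triangles[3 * t + c];          // vertex index of corner c
            for (int axis = 0; axis < 3; axis++) {
                corners[c][axis] = vertices[3 * v + axis];
            }
        }
        return corners;
    }
}
```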

4.1.1 Algorithm:
1) Initial Delaunay triangulation setup:
a. One or a few simplices (completely surrounding the domain) that fulfill the Delaunay criterion a priori; these may be degenerate.
b. 2D: typically the 2 triangles of a square bounding box.
c. 3D: typically 5 or 6 tetrahedra of a cubic bounding box.
2) Mesh generation:
 Boundary node insertion.
 Recovery of the domain boundary.
 Interior classification.
 Interior node insertion.
3) Mesh optimization.

Figure: the initial Delaunay triangulation.


4.1.2 Mesh Generation Equation:

Incremental point insertion (Delaunay kernel, Bowyer–Watson algorithm):

T_(i+1) = T_i − C_P + B_P

where:

P is the (i+1)-th point inserted from the point set S;

T_i is the Delaunay triangulation of the first i points of S;

C_P (the cavity) is the set of elements K of T_i whose circumball contains P;

B_P (the ball) is the set of new elements formed by the boundary facets of C_P and P.
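The Java sketch below implements one such insertion step under simplifying assumptions (2D points, counter-clockwise triangles, no floating-point robustness handling): the cavity C_P is collected with a circumcircle test, its boundary facets are found by cancelling edges shared by two cavity triangles, and the ball B_P is built by fanning P to the remaining boundary edges.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of one Bowyer-Watson insertion step, T(i+1) = T(i) - C_P + B_P.
// Points are double[]{x, y}; a triangle is an int[3] of point indices,
// assumed counter-clockwise. Illustrative, not production code.
public class DelaunayKernel {

    // True if point p lies inside the circumcircle of triangle (a, b, c).
    static boolean inCircumcircle(double[] a, double[] b, double[] c, double[] p) {
        double ax = a[0] - p[0], ay = a[1] - p[1];
        double bx = b[0] - p[0], by = b[1] - p[1];
        double cx = c[0] - p[0], cy = c[1] - p[1];
        double det = (ax * ax + ay * ay) * (bx * cy - cx * by)
                   - (bx * bx + by * by) * (ax * cy - cx * ay)
                   + (cx * cx + cy * cy) * (ax * by - bx * ay);
        return det > 0; // positive for counter-clockwise triangles
    }

    // Removes the cavity C_P (triangles whose circumcircle contains P) and
    // re-triangulates it as the ball B_P: P joined to each boundary edge.
    static List<int[]> insert(List<double[]> pts, List<int[]> tris, int pIdx) {
        double[] p = pts.get(pIdx);
        List<int[]> next = new ArrayList<>();          // becomes T(i) - C_P
        Map<Long, int[]> boundary = new HashMap<>();   // cavity boundary edges
        for (int[] t : tris) {
            if (inCircumcircle(pts.get(t[0]), pts.get(t[1]), pts.get(t[2]), p)) {
                for (int e = 0; e < 3; e++) {
                    int u = t[e], v = t[(e + 1) % 3];
                    long key = (long) Math.min(u, v) << 32 | Math.max(u, v);
                    // An edge shared by two cavity triangles is interior and
                    // cancels out; an edge seen once lies on the boundary.
                    if (boundary.remove(key) == null) boundary.put(key, new int[]{u, v});
                }
            } else {
                next.add(t); // triangle survives outside the cavity
            }
        }
        for (int[] edge : boundary.values())           // B_P: fan around P
            next.add(new int[]{edge[0], edge[1], pIdx});
        return next;                                   // T(i+1)
    }
}
```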

4.2. Creation of Mark Seams and Unwrap Mesh:

A seam is where the ends of the image/cloth are sewn together; that is, we mark seams on the 3D mesh along which the mesh will be cut. According to the marked seams, the mesh is unwrapped for texture creation. Unwrapping is the method of unfolding a mesh to create a 2D texture that fits the 3D object. The UV unwrap tool is used to unwrap the faces of the object.

4.2.1 Creation of Mark Seams:

Cutting the mesh model along the marked seams results in several flattened-out pieces of the mesh, which makes the process of texture creation easier. Seams are simply edges, so the model needs to be in edge-select mode.

The grids below depict a dynamic-programming (DP) process for computing one optimal seam. Each cell represents a pixel: its own energy value comes from the source image, and the computed value is the cumulative sum of energies leading up to and including that pixel.

Step 1 (the algorithm proceeds top to bottom). The top row has nothing above it, so its cumulative energies are the same as in the source image:

    1   4   3   5   2

Step 2. For each pixel in the remaining rows, the cumulative energy is its own energy plus the minimum of the (up to) three cumulative energies above it:

    1       4       3       5       2
    3+1=4   2+1=3   5+3=8   2+2=4   3+2=5

Step 3. Repeat until the bottom row is reached:

    1   4   3   5   2
    4   3   8   4   5
    8   5   7   6   5

From the lowest cumulative energy at the bottom (5), work back up through the minima to recover the seam with minimal energy.

Algorithm explanation:

The energy calculation is trivially parallelized for simple functions, and the calculation of the DP array can also be parallelized with some inter-process communication. However, removing multiple seams at the same time is harder for two reasons: the energy must be regenerated after each removal for correctness, and simply tracing back multiple seams can produce overlaps. A map holding an "nth seam" number for each pixel of the image can be used later for size adjustment.

If both issues are ignored, however, a greedy approximation for parallel seam carving is possible. One starts with the minimum-energy pixel at one end and keeps choosing the minimum-energy path to the other end. The used pixels are marked so that they are not picked again. Local seams can also be computed for smaller parts of the image in parallel, giving a good approximation.
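Ignoring the parallelization concerns, a sequential Java sketch of the single-seam DP described above might look as follows; the main method feeds it the energy grid from the worked example.

```java
// Sketch of the cumulative-energy DP from the grids above: each cell adds
// its own energy to the minimum of the (up to) three cells above it, then
// the seam is recovered by walking back up through the minima.
public class SeamCarving {

    static int[] findVerticalSeam(int[][] energy) {
        int rows = energy.length, cols = energy[0].length;
        int[][] cum = new int[rows][cols];
        cum[0] = energy[0].clone(); // top row copies the source energies
        for (int r = 1; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int best = cum[r - 1][c];
                if (c > 0) best = Math.min(best, cum[r - 1][c - 1]);
                if (c < cols - 1) best = Math.min(best, cum[r - 1][c + 1]);
                cum[r][c] = energy[r][c] + best;
            }
        }
        // Start from the minimal cumulative value in the bottom row...
        int[] seam = new int[rows];
        for (int c = 1; c < cols; c++)
            if (cum[rows - 1][c] < cum[rows - 1][seam[rows - 1]]) seam[rows - 1] = c;
        // ...and trace back up through whichever parent produced each value.
        for (int r = rows - 1; r > 0; r--) {
            int c = seam[r], best = c;
            for (int d = -1; d <= 1; d++) {
                int cc = c + d;
                if (cc >= 0 && cc < cols && cum[r - 1][cc] < cum[r - 1][best]) best = cc;
            }
            seam[r - 1] = best;
        }
        return seam; // seam[r] = column of the seam pixel in row r
    }

    public static void main(String[] args) {
        int[][] energy = {{1, 4, 3, 5, 2}, {3, 2, 5, 2, 3}, {5, 2, 4, 2, 1}};
        int[] seam = findVerticalSeam(energy); // matches the worked grids above
        for (int c : seam) System.out.print(c + " ");
    }
}
```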

4.2.2 Creation of Unwrap Mesh:

When a model is created as a polygon mesh in a 3D modeler, UV coordinates (also known as texture coordinates) can be generated for each vertex of the mesh. One way is for the 3D modeler to unfold the triangle mesh at the seams, automatically laying out the triangles on a flat page. The unwrap determines how the mesh fits best within an image, based on the faces that are connected within the seams; each face maps to a unique area of the image without overlapping other faces. If the mesh is a UV sphere, for example, the modeler might transform it into an equirectangular projection. Once the model is unwrapped, the artist can paint a texture on each triangle individually, using the unwrapped mesh as a template. When the scene is rendered, each triangle will map to the appropriate region of the "decal sheet".

For any point on the sphere, calculate d, the unit vector from that point to the sphere's origin. Assuming that the sphere's poles are aligned with the Y-axis, UV coordinates in the 0..1 range can then be calculated as follows:

u = 0.5 + arctan2(d_x, d_z) / (2π)

v = 0.5 − arcsin(d_y) / π
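As a small sketch, the same mapping in Java, where (dx, dy, dz) is assumed to be the unit vector d described above:

```java
// Sketch of the equirectangular mapping above: d is the unit vector from
// the surface point toward the sphere's origin, poles aligned with Y.
public class SphereUV {
    static double[] sphereUV(double dx, double dy, double dz) {
        double u = 0.5 + Math.atan2(dx, dz) / (2.0 * Math.PI);
        double v = 0.5 - Math.asin(dy) / Math.PI;
        return new double[]{u, v}; // both components lie in the 0..1 range
    }
}
```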

4.3 Texture Creation:

A texture is just a standard bitmap image that is applied over the mesh surface. You can think of
a texture image as though it were printed on a rubber sheet that is stretched and pinned onto the
mesh at appropriate positions. The positioning of the texture is done with the 3D modeling
software that is used to create the mesh. Once the 3D shape of the interior designing in the 2D
image has been recovered in the view of the camera, the mesh is re-projected onto the image
plane. We then can apply the generated texture to the 3D shape and produce the final 3D model
which is ready to be rearranged using scaling and rotation.
In addition to the lighting, a model will also typically make use of texturing to create fine detail
on its surface. A texture is a bit like an image printed on a stretchable sheet of rubber. For each
mesh triangle, a triangular area of the texture image is defined, and that texture triangle is
stretched and “pinned” to fit the mesh triangle. To make this work, each vertex needs to store the
coordinates of the image position that will be pinned to it. These coordinates are two dimensional
and scaled to the 0..1 range (0 means the bottom/left of the image and 1 means the right/top). To
avoid confusing these coordinates with the Cartesian coordinates of the 3D world, they are
referred to as U and V rather than the more familiar X and Y, and so they are commonly called
UV coordinates.
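To make the 0..1 convention concrete, the following illustrative Java helper converts a UV pair into a texel of a texture bitmap; the flip of v accounts for image rows growing downward while v grows upward. The class name and the choice of BufferedImage are our own for this sketch.

```java
import java.awt.image.BufferedImage;

// Illustrative only: converts a (u, v) pair in the 0..1 range into a pixel
// of a texture bitmap, with v = 0 at the bottom row as described above.
public class TextureSampler {
    static int sample(BufferedImage texture, double u, double v) {
        int x = (int) Math.round(u * (texture.getWidth() - 1));
        // Image rows grow downward, so flip v to put 0 at the bottom.
        int y = (int) Math.round((1.0 - v) * (texture.getHeight() - 1));
        return texture.getRGB(x, y); // packed ARGB color at that texel
    }
}
```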

4.4 Mesh Rendering:

The Mesh Renderer takes the geometry from the mesh filter and renders it at the position defined by the object's transform component. Once the texture has been applied, the UV maps are wrapped together to produce the 3D model. The Mesh Renderer combines the mesh data with materials to render the object in the scene, and the final 3D model is then ready to be rearranged using translation, scaling, and rotation. The MeshRenderer[] array in the script controls the visible material of the mesh; if it is removed, the model can no longer be displayed.

Once the simulation on a mesh is finished, we require a method for displaying the resulting
chemical concentrations as a texture. First, we need a smooth way of interpolating the chemical
concentrations across the surface. The chemical value can then be used as input to a function that
maps chemical concentration to color. We have chosen to let the chemical concentration at a
location be a weighted sum of the concentrations at mesh points that fall within a given radius of
the location. If the chemical concentration at a nearby mesh cell Q is v(Q), the value of an
arbitrary point P on the surface is:
v(P) = ( Σ_{Q near P} v(Q) · w(|P − Q|) ) / ( Σ_{Q near P} w(|P − Q|) )

where w is a weight function that falls to zero at the given radius s.
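A small Java sketch of this interpolation is shown below. The text does not fix the weight function w, so a simple linear falloff that reaches zero at the radius s is assumed here.

```java
// Sketch of the weighted-sum interpolation above: the value at P is the
// weight-normalized average of values at mesh points Q within radius s.
public class ConcentrationInterp {
    // points[i] = {x, y, z}; values[i] = chemical concentration v(Q_i).
    static double valueAt(double[] p, double[][] points, double[] values, double s) {
        double num = 0, den = 0;
        for (int i = 0; i < points.length; i++) {
            double dx = points[i][0] - p[0], dy = points[i][1] - p[1],
                   dz = points[i][2] - p[2];
            double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
            if (dist < s) {
                double w = 1.0 - dist / s; // assumed weight: linear falloff to 0 at s
                num += values[i] * w;
                den += w;
            }
        }
        return den > 0 ? num / den : 0; // v(P)
    }
}
```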

4.5 Final 3D Model:

Models are shown on the ground by the Controller object, which displays them once it has detected the ground. When the user selects a model, the backend "selected prefab" changes the status of the selected model, and the models' MeshRenderer is used to change their color.

Figure: algorithm flow. (a) Input 3D model with user-specified backdrop and ground planes; (b) the visible PLS consisting of Cartesian grid faces; (c) a modified, popup-realizable PLS and its layout; (d) the final PLS and its layout after geometric refinement. S0 (blue) is the visible PLS, S (red) is our desired popup-realizable PLS, b is a base face, and H(b) (the stairs between the dotted lines) are the candidate faces of b, which include S0(b) and S(b), lying respectively on S0 and S.

Figure: overview of the proposed ground plane detection algorithm. T1, data processing: 3D sensor → 3D point cloud → pass-through filter → voxelization. T2, ground plane segmentation: normal estimation → ground segmentation → remove ground. T3, object detection and clustering: object clusters.
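In the application itself, ground detection is delegated to ARCore's plane detection rather than a raw point-cloud pipeline. One plausible heuristic, sketched below in Java, is to pick the lowest tracked horizontal, upward-facing plane as the ground; the heuristic and the class name are our own illustrative choices.

```java
import java.util.Collection;
import com.google.ar.core.Plane;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

// Illustrative sketch: approximate "ground detection" in an ARCore app by
// selecting the lowest tracked horizontal, upward-facing plane.
public class GroundDetector {
    static Plane findGround(Session session) {
        Plane ground = null;
        Collection<Plane> planes = session.getAllTrackables(Plane.class);
        for (Plane plane : planes) {
            if (plane.getTrackingState() == TrackingState.TRACKING
                    && plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING) {
                // The plane's center pose gives its height (y) in world
                // space; keep the lowest plane as the ground candidate.
                if (ground == null
                        || plane.getCenterPose().ty() < ground.getCenterPose().ty()) {
                    ground = plane;
                }
            }
        }
        return ground; // null until ARCore has detected a horizontal plane
    }
}
```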

4.5.1 Translation, Rotation, Scaling:

Rotation, translation, and scaling of the model are performed with the help of the "Lean" plugins.

 Translation:
In translation mode, the object is displaced by swiping; the Lean Translate plugin is used for this purpose. Translation moves the object along the x, y, and z axes by (dx, dy, dz), applying the following 3D translation matrix T:

T = | 1  0  0  dx |
    | 0  1  0  dy |
    | 0  0  1  dz |
    | 0  0  0  1  |
 Rotation:

In rotation mode, the object is rotated by dragging in a circle around its center; the Lean Rotate plugin is used for this purpose. For 3-dimensional rotation there are three kinds of rotation: rotation about the X-axis, rotation about the Y-axis, and rotation about the Z-axis. The rotation matrices differ in where the cos(a) and sin(a) terms are placed.

X-axis Rotation:

Rx = | 1    0        0      0 |
     | 0  cos(a)  −sin(a)   0 |
     | 0  sin(a)   cos(a)   0 |
     | 0    0        0      1 |

Y-axis Rotation:

Ry = | cos(a)  0  −sin(a)  0 |
     |   0     1     0     0 |
     | sin(a)  0   cos(a)  0 |
     |   0     0     0     1 |

Z-axis Rotation

Rz = | cos(a)  −sin(a)  0  0 |
     | sin(a)   cos(a)  0  0 |
     |   0        0     1  0 |
     |   0        0     0  1 |
 Scaling:

In scaling mode, the object expands or shrinks through two-finger zoom-in and zoom-out gestures. It is counter-intuitive to think of "scaling" a point rather than an object, so consider a rectangle centered at the origin. To zoom in 2x, intuition says to multiply the coordinates of each point by 2.

Figure: the inner rectangle, with corners (−2, 1), (2, 1), (−2, −1), and (2, −1), is scaled twice to produce the outer rectangle with corners (−4, 2), (4, 2), (−4, −2), and (4, −2).

And it works. However, this does not work for an object that is not centered at the origin: multiplying the coordinates also translates the whole object away from the origin.

Figure: the smaller rectangle, with corners (1, 1), (4, 1), (1, 2), and (4, 2), is scaled directly by 2x into the rectangle with corners (2, 2), (8, 2), (2, 4), and (8, 4); the result is shifted toward the top-right.
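The standard remedy, sketched below in Java, is to scale about the object's own center: translate the center to the origin, scale, then translate back, composing the homogeneous matrices introduced above (the helper names are illustrative).

```java
// Sketch of scaling about an object's own center. Matrices are 4x4,
// column-vector convention, composed as T(c) * S * T(-c).
public class Transform3D {
    static double[][] multiply(double[][] a, double[][] b) {
        double[][] m = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    m[i][j] += a[i][k] * b[k][j]; // accumulate row-by-column
        return m;
    }

    static double[][] translation(double dx, double dy, double dz) {
        return new double[][]{{1, 0, 0, dx}, {0, 1, 0, dy}, {0, 0, 1, dz}, {0, 0, 0, 1}};
    }

    static double[][] scale(double s) {
        return new double[][]{{s, 0, 0, 0}, {0, s, 0, 0}, {0, 0, s, 0}, {0, 0, 0, 1}};
    }

    /** Scales by s about the pivot (cx, cy, cz) instead of the origin. */
    static double[][] scaleAbout(double s, double cx, double cy, double cz) {
        return multiply(translation(cx, cy, cz),
               multiply(scale(s), translation(-cx, -cy, -cz)));
    }
}
```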

5 Results:
The results produced are as follows:
1) A GUI that allows users to select, place, remove, and modify 3D models interactively using different touch gestures; at any time, more than 90% of the screen is filled by the camera stream and the rendered models.
2) A solid backend that handles the app's core functionality, such as detecting gestures, modifying rendered 3D models, and loading 3D models from local storage.
Scenario:
First, run the Augmented Reality application on an AR-supported Android device to view models in physical spaces. After the application UI appears on the device screen, start the application. The app then asks for camera access so the AR feature can use the camera. Once access is granted, the app uses the device's sensors to move a virtual camera through the scene in real time based on the device's movement, and creates a mesh on the ground after detecting the plane through proper light estimation. This mesh is shown virtually through the device camera in the real world. Once the ground has been detected, the user can select interior design models, change their color, and manipulate them by rotating, dragging, and zooming in and out.

The following describes the UI of the application in which the AR feature is embedded:

 Start application:

To run the application, the user starts it by tapping "start app".

 Ground detection:

After camera access is granted, plane discovery detects the ground, and the point cloud creates dots on the screen that are joined by lines, forming a mesh on the ground plane.

 Select models:
Select the desired interior design from the list view, then tap the target place on the screen to view the model in the physical space.

 Change color:

(Screenshots: before and after changing the color.)

By selecting "change color", different color options are displayed on the screen; these can be applied to the interior design models and viewed in the physical space.

 Rotate models:

The user can easily rotate models on the screen through 180 degrees.

 Drag models:
(Screenshots: before and after dragging the model.)

The user can drag the model left, right, up, and down.

 Zoom-in models:

The user can pinch inward on the model to make it smaller and adjust its size.

 Zoom-out models:

The user can pinch outward on the model to make it bigger and adjust its size.

6 Conclusion:
Augmented reality is becoming more and more common in the real world. In this research paper, we examined how marker-less AR can be used for online interior design. AR has unique advantages and is particularly good at tackling visualization problems. In an AR environment, buying furniture can be convenient and straightforward while saving costs by greatly lowering the chance of product returns. The advantage of this application is that it reduces cost and provides multimedia augmentation with vivid, high-quality simulations for users in real time. Additionally, it allows customers to grasp the concept of the project, helping them realize their customized requirements and achieve a design closer to their preferences.

7 Future Work:
We are attempting to integrate photogrammetry into our existing platform, which will allow us to reconstruct a 3D model of furniture from pictures. At present, the user can only visualize the 3D models stored locally; we would like to expand this functionality by connecting the app to a cloud repository from which a user could browse furniture and import it at runtime. Additionally, we aim to revolutionize the way models are shared by using photogrammetry principles.
