
Unit – 5

Introduction to MIT App Inventor:


MIT stands for Massachusetts Institute of Technology.
MIT App Inventor is an intuitive, visual programming environment that allows everyone, even
children, to build fully functional apps for smartphones and tablets. Those new to MIT App
Inventor can have a simple first app up and running in less than 30 minutes.

App Inventor is a visual programming environment, maintained by MIT, used to facilitate the
development of Android applications. It uses block-based coding to allow anyone not
familiar with programming to easily develop functional applications in a short amount of
time. In order to use App Inventor, you must install the MIT App Inventor Tools to
connect an Android device to the development environment or to use an emulator.

Who invented the MIT App Inventor?


Hal Abelson conceived the idea of App Inventor while on sabbatical at Google Labs in 2007.
Abelson had previously taught a course at MIT on mobile programming, but at the time mobile app
development required significant investment on the part of developers and development
environments.

What Is MIT App Inventor?


MIT App Inventor is a programming learning tool aimed at total beginners as well as
novices wishing to advance further. It came about as a collaboration between
Google and MIT. It uses coding to create real-world, usable apps for Android and iOS
devices that students can run and play with.

MIT App Inventor uses drag-and-drop style code building blocks, similar to those used
by the Scratch coding language. This makes it easy to pick up from a young age and also
helps take the otherwise potentially overwhelming complexity out of getting started.

The use of bright colors, clear buttons, and plenty of tutorial guidance all add up to a
tool that helps even the more tech-troubled learners get up and running. That
includes students being guided by a teacher in class as well as those wishing to get
started alone, from home.
Advantages of MIT App Inventor
• Access to most of the phone's functionality: phone calls, SMS texting, sensors for
location, orientation, and acceleration, text-to-speech and speech recognition, sound,
video.
• The ability to invoke other apps, with the ActivityStarter component
• Programming control just as with a textual language. There are blocks for
conditionals (if, ifelse), foreach, and while, and a fairly comprehensive list of math
and logic blocks.
• Database access, both on the device and on the web. So you can save data
persistently, and with a web database share data amongst phones.
• Access to web information sources (APIs)-- you can bring in data from Facebook,
Amazon, etc.

Disadvantages of MIT App Inventor


• Limited UIs. The user interface builder has improved but is still a bit buggy and
limited, so you can't build any user interface. For instance, you can't create apps with
multiple screens and handling orientation changes has some glitches. These
problems are not fundamental to the design of App Inventor and will soon be fixed.
• Limited Access to the device. There are not yet components for all the data and
functionality of the phone. For instance, you can't save and retrieve files from the file
system and you have only limited access to the contact list (e.g., you cannot create
groups).
• Limited Access to the Web. You can only access APIs that follow a particular protocol
(App-Inventor-compatible APIs). So if you want to get data from the web, you’ll need
to program, or have a programmer create, an App-Inventor-compliant API that wraps
an existing API.
• No polymorphic components. Function blocks are tied to specific components, so
there is no way to call functions on a generic component. For instance, if you create
a procedure MoveXY, it has to be tied to a specific image sprite, not a general image
sprite.
• Limited access to the Android Market. The apps (.apk files) generated by App
Inventor lack the required configuration for direct inclusion in the market. However,
there is now a workaround for market publication.

The components of MIT App Inventor:


This document describes the components you can use in App Inventor to build your apps.
Each component can have methods, events, and properties. Most properties can be changed
by apps — these properties have blocks you can use to get and set the values. Some
properties can’t be changed by apps — these only have blocks you can use to get the values,
not set them. Each item is annotated with additional text to indicate the type of the item,
whether it is read-only, and whether it only exists in the designer or blocks.

The component palette is organized into the following categories:

1. User Interface
2. Layout
3. Media
4. Drawing & Animation
5. Map
6. Chart
7. Sensors
8. Social
9. Storage
10. Connectivity
11. LEGO® MINDSTORMS®
12. Experimental

How does application coding work in MIT App Inventor?

MIT App Inventor begins with a tutorial that allows students to be guided into the
process of basic coding without the need for any other help. As long as the student is
able to read and understand basic technical guidance, they should be able to begin
code building right away.

Students can use their own phones or tablets to test the apps, creating code that uses
the device's hardware. For example, a student could create a program that has an
action carried out, such as turning on the phone's light when the device is shaken by
the person holding it.

Students are able to pick from a wide selection of actions, as blocks, and drag each one
into a timeline that allows each action to be carried out on the device. This helps to
teach the process-based way that coding works.

If the phone is set up and connected, it can be synchronized in real time. This means
that students can build, then test, and see the results right away on their own device.
As such, no more than one device is needed to build and test live with ease.

Crucially, the guidance isn't too much, so students will need to try things and learn by doing.
Programming Basics and Dialog
Steps to use MIT App Inventor:
Step 1: Create a Gmail account in case you don’t have one.

Step 2: Open the link https://appinventor.mit.edu/ and log in to your Gmail account.

Step 3: Install the App Inventor Companion app (MIT AI2 Companion) on your mobile
device; it enables live testing of your application.

Step 4: The mobile device and the laptop/desktop must be connected to the same WiFi network.

Step 5: To start building the app, click on “Start New Project”.


Step 6: To connect your mobile device, choose “Connect” and “AI
Companion” from the top menu.

Step 7: Now, to connect the MIT AI2 Companion app on your device with your
desktop/laptop, scan the QR code or type the 6-digit code that appears on your PC screen.

Step 8: Now you can see the app you are building on your device.
MORE PROGRAMMING BASICS:

Alarm Clock Applications :

1. DESIGNER PROPERTIES OF THE CLOCK

1.1 Timer Always Fires

This is a Boolean property, i.e it accepts only true or false.


Set this property to true if you wish the clock to run even when the application is not the
active screen, i.e., when the user is using some other application.
Can this be set using blocks? Yes!
By default: True
Possible uses:

• To run a timed game


• While the user is giving a quiz

1.2 Timer Enabled

This is a boolean property, i.e it accepts only true or false.


When you want the clock to run its timer, set this property to true.
Can this be set using blocks? Yes!
Note: If the property is set to true in the designer, then the clock will automatically fire
as soon as the screen initializes.
By default: True

1.3 Timer Interval

This property accepts a number.


With this property we can specify the interval (in milliseconds) between each firing of
the Clock.Timer event.
Can this be set using blocks? Yes!
Note: The clock starts to lag when the interval is set to a very small value, such as 10
milliseconds.
By default: 1000 milliseconds (1 second)
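
To make the relationship between the interval and the Timer event concrete, here is a minimal sketch in plain Python (not App Inventor blocks). The names timer_interval_ms, timer_enabled, and on_timer are illustrative assumptions that mirror the Clock's TimerInterval and TimerEnabled properties and its Timer event.

```python
import time

# Assumed illustrative values mirroring the Clock component's properties.
timer_interval_ms = 1000   # default: 1 second between Timer events
timer_enabled = True

def on_timer(tick_count):
    # Stand-in for the blocks placed inside "when Clock1.Timer do ..."
    print(f"Timer fired, tick #{tick_count}")

tick = 0
while timer_enabled and tick < 5:        # run a few ticks for the demo
    time.sleep(timer_interval_ms / 1000) # wait one interval
    tick += 1
    on_timer(tick)
```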
2. DESCRIPTION:
1. I used MIT App Inventor to build my alarm clock app.

2. First, I set up a new project and named it ALARMCLOCK.

3. To set up an alarm we need an extension. For that, search for “MIT App Inventor
extensions”.

4. Then click PURAVIDAAPPS and you can see the alarm extension.

5. At the bottom of the page there is an extension with a .aix file link to download.

6. Download the extension

7. Go back to the project; you can see the Extension option at the bottom of the
Palette (left side).

8. Drag the downloaded file onto the screen and then import it.

9. Now we have the alarm extension; drag and drop it onto the screen (it is
invisible).

10. To set an alarm clock:

Drag and drop a vertical arrangement onto the screen and set its height and
width to Fill parent (you can choose your own background).

Now add a text box to provide the message for the alarm. Drag and drop a horizontal
arrangement, set its width to Fill parent, and change its alignment to centre.

Drag and drop two text boxes to provide the hours and minutes for the alarm, and
change their width to 20% (you can choose your own percentage).
Drag and drop a button to set the alarm and edit its text to “Set alarm”. Usually
the setup is as shown in the picture below.

Now we can proceed to the Blocks section.

• When Button1 is clicked we need to set the alarm

• We need to set :

1. Message

2. Hour

3. Minute

• Now provide :

1. Text box 3 to message

2. Text box 1 to hour

3. Text box 2 to minutes

The setup in the Blocks section is as shown below:
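
Since the blocks screenshot is not reproduced here, the following Python-style sketch is a rough rendering of what those blocks express. The component names (Button1, TextBox1, TextBox2, TextBox3) come from the steps above, while set_alarm is an assumed stand-in for the alarm extension's block, not the extension's real API.

```python
# Assumed helper standing in for the alarm extension's "set alarm" block.
def set_alarm(message, hour, minute):
    print(f"Alarm set for {hour:02d}:{minute:02d} with message: {message}")

def on_button1_click(textbox1_text, textbox2_text, textbox3_text):
    # TextBox3 holds the message, TextBox1 the hour, TextBox2 the minutes,
    # matching the mapping listed in the steps above.
    message = textbox3_text
    hour = int(textbox1_text)
    minute = int(textbox2_text)
    set_alarm(message, hour, minute)

on_button1_click("18", "30", "Good morning")  # example: an 18:30 alarm
```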


We are ready with our alarm now

• Switch to the Designer and build the app

• We see the output of the alarm as shown below


Finally, it’s time to set the required alarm

• I set my alarm to 18:30 with a good morning message

• I finally got my alarm at 18:30 as shown below

Audio and Video (Media)


1. Camera

Use a camera component to take a picture on the phone.


Camera is a non-visible component that takes a picture using the device’s camera.
After the picture is taken, the path to the file on the phone containing the picture is
available as an argument to the AfterPicture event. The path can be used, for
example, as the Picture property of an Image component.
Properties
None
Events
AfterPicture(image)
Called after the picture is taken. The text argument image is the path that can be used to
locate the image on the phone.
Methods
TakePicture()

Takes a picture, then raises the AfterPicture event.

2. Player
Multimedia component that plays audio and controls phone vibration. The name of a
multimedia file is specified in the Source property, which can be set in the Designer
or in the Blocks Editor. The length of time for a vibration is specified in the Blocks
Editor in milliseconds (thousandths of a second).
For supported audio formats, see Android Supported Media Formats.
This component is best for long sound files, such as songs, while
the Sound component is more efficient for short files, such as sound effects.
Properties

IsPlaying

Reports whether the media is playing.


Loop
If true, the Player will loop when it plays. Setting Loop while the player is playing will
affect the current playing.
PlayOnlyInForeground
If true, the Player will pause playing when leaving the current screen; if false (the default),
the Player continues playing whether or not the current screen is displaying.
Source
Sets the audio source.
Volume
Sets the volume property to a number between 0 and 100.
Events

Completed()

Indicates that the media has reached the end


OtherPlayerStarted()
This event is signaled when another player has started (and the current player is
playing or paused, but not stopped).

Methods
Pause()
Suspends playing the media if it is playing.
Start()
Plays the media. If it was previously paused, the playing is resumed. If it was
previously stopped, it starts from the beginning.
Stop()
Stops playing the media and seeks to the beginning of the song.
Vibrate(milliseconds)
Vibrates for specified number of milliseconds.

3. Sound:
A multimedia component that plays sound files and optionally vibrates for the
number of milliseconds (thousandths of a second) specified in the Blocks Editor.
The name of the sound file to play can be specified either in the Designer or in the
Blocks Editor.
For supported sound file formats, see Android Supported Media Formats.
This Sound component is best for short sound files, such as sound effects, while
the Player component is more efficient for longer sounds, such as songs.
Properties
MinimumInterval

Specifies the minimum interval required between calls to Play, in milliseconds. Once
the sound starts playing, all further Play calls will be ignored until the interval has
elapsed.
Source
The name of the sound file. Only certain formats are supported. See
http://developer.android.com/guide/appendix/media-formats.html.
Events
None
Methods
Pause()

Pauses playing the sound if it is being played.


Play()
Plays the sound.
Resume()
Resumes playing the sound after a pause.
Stop()
Stops playing the sound if it is being played.
Vibrate(millisecs)
Vibrates for the specified number of milliseconds.
4. SoundRecorder
Multimedia component that records audio.
Properties
SavedRecording
Specifies the path to the file where the recording should be stored. If this property is
the empty string, then starting a recording will create a file in an appropriate location.
If the property is not the empty string, it should specify a complete path to a file in an
existing directory, including a file name with the extension .3gp.
Events
AfterSoundRecorded(sound)

Provides the location of the newly created sound.


StartedRecording()
Indicates that the recorder has started, and can be stopped.
StoppedRecording()
Indicates that the recorder has stopped, and can be started again.
Methods
Start()

Starts recording.
Stop()
Stops recording.

5. SpeechRecognizer

Use a SpeechRecognizer component to listen to the user speaking and convert the
spoken sound into text using the device’s speech recognition feature.
Properties
Language

Suggests the language to use for recognizing speech. An empty string (the default)
will use the system’s default language.
Language is specified using a language tag with an optional region suffix,
such as en or es-MX. The set of supported languages will vary by device.
Result

Returns the last text produced by the recognizer.


UseLegacy
If true, a separate dialog is used to recognize speech (the default). If false, speech is
recognized in the background and updates are received as it recognizes
words. AfterGettingText may get several calls with partial set to true . Once
sufficient time has elapsed since the last utterance, or StopListening is called, the
last string will be returned with partial set to false to indicate that it is the final
recognized string and no more data will be provided until recognition is again started.
See AfterGettingText for more details on partial speech recognition.
Events
AfterGettingText(result,partial)
Simple event to raise after the SpeechRecognizer has recognized speech.
If UseLegacy is true , then this event will only happen once at the very end of the
recognition. If UseLegacy is false , then this event will run multiple times as
the SpeechRecognizer incrementally recognizes speech. In this case, partial will
be true until the recognized speech has been finalized (e.g., the user has stopped
speaking), in which case partial will be false .
BeforeGettingText()
Simple event to raise when the SpeechRecognizer is invoked but before its activity is
started.
Methods
GetText()

Asks the user to speak, and converts the speech to text. Signals
the AfterGettingText event when the result is available.
Stop()
Function used to forcefully stop listening speech in cases where
SpeechRecognizer cannot stop automatically. This function works only when
the UseLegacy property is set to false .

6. TextToSpeech
The TextToSpeech component speaks a given text aloud. You can set the pitch and
the rate of speech.
You can also set a language by supplying a language code. This changes the
pronunciation of words, not the actual language spoken. For example, setting
the Language to French and speaking English text will sound like someone speaking
English (en) with a French accent.
You can also specify a country by supplying a Country code. This can affect the
pronunciation. For example, British English (GBR) will sound different from US
English (USA). Not every country code will affect every language.
The languages and countries available depend on the particular device, and can be
listed with the AvailableLanguages and AvailableCountries properties.
Properties
AvailableCountries

List of the country codes available on this device for use with TextToSpeech. Check
the Android developer documentation under supported languages to find the
meanings of these abbreviations.
AvailableLanguages
List of the languages available on this device for use with TextToSpeech. Check the
Android developer documentation under supported languages to find the meanings
of these abbreviations.
Country
Country code to use for speech generation. This can affect the pronunciation. For
example, British English (GBR) will sound different from US English (USA). Not every
country code will affect every language.
Language
Sets the language for TextToSpeech. This changes the way that words are
pronounced, not the actual language that is spoken. For example, setting the
language to French and speaking English text will sound like someone speaking
English with a French accent.
Pitch
Sets the speech pitch for the TextToSpeech.
The values should be between 0 and 2 where lower values lower the tone of
synthesized voice and greater values raise it.
The default value is 1.0 for normal pitch.
Result

Returns true if the text was successfully converted to speech, otherwise false .
SpeechRate
Sets the SpeechRate for TextToSpeech.
The values should be between 0 and 2, where lower values slow down the
speech and greater values speed it up.
The default value is 1.0 for normal speech rate.
Events
AfterSpeaking(result)

Event to raise after the message is spoken. The result will be true if the message is
spoken successfully, otherwise it will be false .
BeforeSpeaking()
Event to raise when Speak is invoked, before the message is spoken.
Methods
Speak(message)
Speaks the given message.

7. VideoPlayer
A multimedia component capable of playing videos. When the application is run,
the VideoPlayer will be displayed as a rectangle on-screen. If the user touches the
rectangle, controls will appear to play/pause, skip ahead, and skip backward within
the video. The application can also control behavior by calling the Start, Pause,
and SeekTo methods.
Video files should be in 3GPP (.3gp) or MPEG-4 (.mp4) formats. For more details
about legal formats, see Android Supported Media Formats.
App Inventor only permits video files under 1 MB and limits the total size of an
application to 5 MB, not all of which is available for media (video, audio, and sound)
files. If your media files are too large, you may get errors when packaging or
installing your application, in which case you should reduce the number of media
files or their sizes. Most video editing software, such as Windows Movie Maker and
Apple iMovie, can help you decrease the size of videos by shortening them or re-
encoding the video into a more compact format.
You can also set the media source to a URL that points to a streaming video, but the
URL must point to the video file itself, not to a program that plays the video.
Properties
FullScreen

Sets whether the video should be shown in fullscreen or not.


Height
Specifies the component’s vertical height, measured in pixels.
HeightPercent
Specifies the VideoPlayer’s vertical height as a percentage of the Screen’s Height.
Source
Sets the “path” to the video. Usually, this will be the name of the video file, which
should be added in the Designer.
Visible
Specifies whether the VideoPlayer should be visible on the screen. Value is true if
the VideoPlayer is showing and false if hidden.
Volume
Sets the volume property to a number between 0 and 100. Values less than 0 will be
treated as 0, and values greater than 100 will be treated as 100.
Width
Specifies the component’s horizontal width, measured in pixels.
WidthPercent
Specifies the horizontal width of the VideoPlayer as a percentage of
the Screen’s Width.
Events
Completed()

Indicates that the video has reached the end


Methods
GetDuration()

Returns duration of the video in milliseconds.


Pause()
Pauses playback of the video. Playback can be resumed at the same location by
calling the Start method.
SeekTo(ms)
Seeks to the requested time (specified in milliseconds) in the video. If the video is
paused, the frame shown will not be updated by the seek. The player can jump only
to key frames in the video, so seeking to times that differ by short intervals may not
actually move to different frames.
Start()
Plays the media specified by the Source.
Stop()
Resets to start of video and pauses it if video was playing.

Drawing and Animation Applications:


Table of Contents:
• Ball
• Canvas
• ImageSprite

1. Ball
A round ‘sprite’ that can be placed on a Canvas, where it can react to touches and
drags, interact with other sprites (ImageSprites and other Balls) and the edge of
the Canvas, and move according to its property values.
For example, to have a Ball move 4 pixels toward the top of a Canvas every 500
milliseconds (half second), you would set the Speed property to 4 [pixels],
the Interval property to 500 [milliseconds], the Heading property to 90 [degrees], and
the Enabled property to true . These and its other properties can be changed at any
time.
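
As a rough illustration of how those property values drive motion, here is a small Python sketch (not blocks, and not necessarily the exact update App Inventor performs internally): the sprite advances Speed pixels along its Heading every Interval milliseconds, and Canvas y-coordinates grow downward, so heading 90 moves the Ball up the screen.

```python
import math

def step(x, y, speed, heading_degrees):
    """One movement step: advance `speed` pixels along `heading_degrees`.
    Canvas y grows downward, so a heading of 90 (toward the top) decreases y."""
    rad = math.radians(heading_degrees)
    return x + speed * math.cos(rad), y - speed * math.sin(rad)

# The example above: Speed = 4, Heading = 90, Interval = 500 ms.
x, y = 100.0, 200.0
for _ in range(3):          # three intervals = 1.5 seconds of motion
    x, y = step(x, y, speed=4, heading_degrees=90)
    print(x, y)             # y shrinks by 4 each step: 196, 192, 188
```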
The difference between a Ball and an ImageSprite is that the latter can get its
appearance from an image file, while a Ball’s appearance can only be changed by
varying its PaintColor and Radius properties.

Properties
Enabled

Controls whether the Ball moves when its speed is non-zero.


Heading
The Ball’s heading in degrees above the positive x-axis. Zero degrees is toward the
right of the screen; 90 degrees is toward the top of the screen.
Interval
The interval in milliseconds at which the Ball’s position is updated. For example, if
the Interval is 50 and the Speed is 10, then the Ball will move 10 pixels every 50
milliseconds.
OriginAtCenter
Whether the x- and y-coordinates should represent the center of the Ball ( true ) or its
left and top edges ( false ).
PaintColor
The color of the Ball.
Radius
The distance from the center of the Ball to its edge.
Speed
The speed at which the Ball moves. The Ball moves this many pixels
every Interval milliseconds if Enabled is true .
Visible
Sets whether sprite should be visible.
X
The horizontal coordinate of the Ball, increasing as the Ball moves right. If the
property OriginAtCenter is true, the coordinate is for the center of the Ball;
otherwise, it is for the leftmost point of the Ball.
Y
The vertical coordinate of the Ball, increasing as the Ball moves down. If the
property OriginAtCenter is true, the coordinate is for the center of
the Ball otherwise, it is for the uppermost point of the Ball.
Z
How the Ball should be layered relative to other Balls and ImageSprites, with higher-
numbered layers in front of lower-numbered layers.
Events
CollidedWith(other)

Event handler called when two enabled sprites (Balls or ImageSprites) collide. Note
that checking for collisions with a rotated ImageSprite currently checks against its
unrotated position. Therefore, collision checking will be inaccurate for tall narrow or
short wide sprites that are rotated.
Dragged(startX,startY,prevX,prevY,currentX,currentY)
Event handler for Dragged events. On all calls, the starting coordinates are where the
screen was first touched, and the “current” coordinates describe the endpoint of the
current line segment. On the first call within a given drag, the “previous” coordinates
are the same as the starting coordinates; subsequently, they are the “current”
coordinates from the prior call. Note that the Ball won’t actually move anywhere in
response to the Dragged event unless MoveTo is specifically called.
EdgeReached(edge)
Event handler called when the Ball reaches an edge of the screen. If Bounce is then
called with that edge, the sprite will appear to bounce off of the edge it reached. Edge
here is represented as an integer that indicates one of eight directions north(1),
northeast(2), east(3), southeast(4), south (-1), southwest(-2), west(-3), and
northwest(-4).
Flung(x,y,speed,heading,xvel,yvel)
When a fling gesture (quick swipe) is made on the sprite: provides the (x,y) position
of the start of the fling, relative to the upper left of the canvas. Also provides the
speed (pixels per millisecond) and heading (0-360 degrees) of the fling, as well as the
x velocity and y velocity components of the fling’s vector.
NoLongerCollidingWith(other)
Event indicating that a pair of sprites are no longer colliding.
TouchDown(x,y)
When the user begins touching the sprite (places finger on sprite and leaves it there):
provides the (x,y) position of the touch, relative to the upper left of the canvas
TouchUp(x,y)
When the user stops touching the sprite (lifts finger after a TouchDown event):
provides the (x,y) position of the touch, relative to the upper left of the canvas.
Touched(x,y)
When the user touches the sprite and then immediately lifts finger: provides the (x,y)
position of the touch, relative to the upper left of the canvas.
Methods
Bounce(edge)
Makes this Ball bounce, as if off a wall. For normal bouncing, the edge argument
should be the one returned by EdgeReached.
CollidingWith(other)
Indicates whether a collision has been registered between this Ball and the
passed other sprite.
MoveIntoBounds()
Moves the sprite back in bounds if part of it extends out of bounds, having no effect
otherwise. If the sprite is too wide to fit on the canvas, this aligns the left side of the
sprite with the left side of the canvas. If the sprite is too tall to fit on the canvas, this
aligns the top side of the sprite with the top side of the canvas.
MoveTo(x,y)
Sets the x and y coordinates of the Ball. If OriginAtCenter is true, the center of
the Ball will be placed here. Otherwise, the top left edge of the Ball will be placed at
the specified coordinates.
MoveToPoint(coordinates)
Moves the Ball so that its origin is at the specified x and y coordinates.
PointInDirection(x,y)
Turns this Ball to point toward the point with the coordinates (x, y).
PointTowards(target)
Turns this Ball to point towards a given target sprite. The new heading will be
parallel to the line joining the centerpoints of the two sprites.
2. Canvas
A two-dimensional touch-sensitive rectangular panel on which drawing can be done
and sprites can be moved.
The BackgroundColor, PaintColor, BackgroundImage, Width, and Height of
the Canvas can be set in either the Designer or in the Blocks Editor.
The Width and Height are measured in pixels and must be positive.
Any location on the Canvas can be specified as a pair of (X, Y) values, where

• X is the number of pixels away from the left edge of the Canvas
• Y is the number of pixels away from the top edge of the Canvas

There are events to tell when and where a Canvas has been touched or a Sprite
(ImageSprite or Ball) has been dragged. There are also methods for drawing points,
lines, circles, shapes, arcs, and text.
Properties
BackgroundColor

Specifies the Canvas’s background color as an alpha-red-green-blue integer,


i.e., 0xAARRGGBB. An alpha of 00 indicates fully transparent and FF means opaque.
The background color only shows if there is no background image.
BackgroundImage
Specifies the name of a file containing the background image for the Canvas.
BackgroundImageinBase64
Set the background image in Base64 format. This requires API level >= 8. For devices
with API level less than 8, setting this will end up with an empty background.
ExtendMovesOutsideCanvas
Determines whether moves can extend beyond the canvas borders. Default is false.
This should normally be false, and the property is provided for backwards
compatibility.
FontSize
Specifies the font size of text drawn on the Canvas.
Height
Specifies the Canvas’s vertical height, measured in pixels.
HeightPercent
Specifies the Canvas’s vertical height as a percentage of the Screen’s Height.
LineWidth
Specifies the width of lines drawn on the Canvas.
PaintColor
Specifies the paint color as an alpha-red-green-blue integer, i.e., 0xAARRGGBB. An
alpha of 00 indicates fully transparent and FF means opaque.
TapThreshold
Specifies the movement threshold to differentiate a drag from a tap.
TextAlignment
Specifies the alignment of the canvas’s text: center, normal (starting at the specified
point in DrawText or DrawTextAtAngle), or opposite (ending at the specified point
in DrawText or DrawTextAtAngle).
Visible
Specifies whether the Canvas should be visible on the screen. Value is true if
the Canvas is showing and false if hidden.
Width
Specifies the horizontal width of the Canvas, measured in pixels.
WidthPercent
Specifies the horizontal width of the Canvas as a percentage of the Screen’s Width.
Events
Dragged(startX,startY,prevX,prevY,currentX,currentY,draggedAnySprite)

When the user does a drag from one point (prevX, prevY) to another (x, y). The pair
(startX, startY) indicates where the user first touched the screen, and
“draggedAnySprite” indicates whether a sprite is being dragged.
Flung(x,y,speed,heading,xvel,yvel,flungSprite)
When a fling gesture (quick swipe) is made on the canvas: provides the (x,y) position
of the start of the fling, relative to the upper left of the canvas. Also provides the
speed (pixels per millisecond) and heading (0-360 degrees) of the fling, as well as the
x velocity and y velocity components of the fling’s vector. The value “flungSprite” is
true if a sprite was located near the starting point of the fling gesture.
TouchDown(x,y)
When the user begins touching the canvas (places finger on canvas and leaves it
there): provides the (x,y) position of the touch, relative to the upper left of the canvas
TouchUp(x,y)
When the user stops touching the canvas (lifts finger after a TouchDown event):
provides the (x,y) position of the touch, relative to the upper left of the canvas
Touched(x,y,touchedAnySprite)
When the user touches the canvas and then immediately lifts finger: provides the
(x,y) position of the touch, relative to the upper left of the canvas. TouchedAnySprite
is true if the same touch also touched a sprite, and false otherwise.
Methods
Clear()

Clears the canvas, without removing the background image, if one was provided.
DrawArc(left,top,right,bottom,startAngle,sweepAngle,useCenter,fill)
Draws an arc on the Canvas, by drawing an arc from a specified oval (specified by left,
top, right and bottom). The start angle is 0 when heading to the right, and increases when
rotating clockwise. When useCenter is true, a sector will be drawn instead of an arc.
When fill is true, a filled arc (or sector) will be drawn instead of just an outline.
DrawCircle(centerX,centerY,radius,fill)
Draws a circle (filled in) with the given radius centered at the given coordinates on
the Canvas.
DrawLine(x1,y1,x2,y2)
Draws a line between the given coordinates on the canvas.
DrawPoint(x,y)
Draws a point at the given coordinates on the canvas.
DrawShape(pointList,fill)
Draws a shape on the canvas. pointList should be a list containing sub-lists, each with two
numbers representing a coordinate, e.g. ((x1 y1) (x2 y2) (x3 y3)). The first point and last
point do not need to be the same. When fill is true, the shape will be filled.
DrawText(text,x,y)
Draws the specified text relative to the specified coordinates using the values of
the FontSize and TextAlignment properties.
DrawTextAtAngle(text,x,y,angle)
Draws the specified text starting at the specified coordinates at the specified angle
using the values of the FontSize and TextAlignment properties.
GetBackgroundPixelColor(x,y)
Gets the color of the given pixel, ignoring sprites.
GetPixelColor(x,y)
Gets the color of the given pixel, including sprites.
Save()
Saves a picture of this Canvas to the device’s external storage. If an error occurs, the
Screen’s ErrorOccurred event will be called.
SaveAs(fileName)
Saves a picture of this Canvas to the device’s external storage in the file named
fileName. fileName must end with one of “.jpg”, “.jpeg”, or “.png” (which determines
the file type: JPEG, or PNG).
SetBackgroundPixelColor(x,y,color)
Sets the color of the given pixel. This has no effect if the coordinates are out of
bounds.
3. ImageSprite
A ‘sprite’ that can be placed on a Canvas, where it can react to touches and drags,
interact with other sprites (Balls and other ImageSprites) and the edge of the Canvas,
and move according to its property values. Its appearance is that of the image
specified in its Picture property (unless its Visible property is false).
To have an ImageSprite move 10 pixels to the left every 1000 milliseconds (one
second), for example, you would set the Speed property to 10 [pixels],
the Interval property to 1000 [milliseconds], the Heading property to 180 [degrees],
and the Enabled property to true . A sprite whose Rotates property is true will rotate its
image as the sprite’s heading changes. Checking for collisions with a rotated sprite
currently checks the sprite’s unrotated position so that collision checking will be
inaccurate for tall narrow or short wide sprites that are rotated. Any of the sprite
properties can be changed at any time under program control.
Properties
Enabled

Controls whether the ImageSprite moves when its speed is non-zero.


Heading
The ImageSprite’s heading in degrees above the positive x-axis. Zero degrees is
toward the right of the screen; 90 degrees is toward the top of the screen.
Height
The height of the ImageSprite in pixels.
Interval
The interval in milliseconds at which the ImageSprite’s position is updated. For
example, if the Interval is 50 and the Speed is 10, then the ImageSprite will move
10 pixels every 50 milliseconds.
Picture
Specifies the path of the sprite’s picture.
Rotates
If true, the sprite image rotates to match the sprite’s heading. If false, the sprite
image does not rotate when the sprite changes heading. The sprite rotates around its
centerpoint.
Speed
The speed at which the ImageSprite moves. The ImageSprite moves this many
pixels every Interval milliseconds if Enabled is true .
Visible
Sets whether sprite should be visible.
Width
The width of the ImageSprite in pixels.
X
The horizontal coordinate of the left edge of the ImageSprite, increasing as the
ImageSprite moves right.
Y
The vertical coordinate of the top edge of the ImageSprite, increasing as the
ImageSprite moves down.
Z
How the ImageSprite should be layered relative to other Balls and ImageSprites, with
higher-numbered layers in front of lower-numbered layers.
Events
CollidedWith(other)

Event handler called when two enabled sprites (Balls or ImageSprites) collide. Note
that checking for collisions with a rotated ImageSprite currently checks against its
unrotated position. Therefore, collision checking will be inaccurate for tall narrow or
short wide sprites that are rotated.
Dragged(startX,startY,prevX,prevY,currentX,currentY)
Event handler for Dragged events. On all calls, the starting coordinates are where the
screen was first touched, and the “current” coordinates describe the endpoint of the
current line segment. On the first call within a given drag, the “previous” coordinates
are the same as the starting coordinates; subsequently, they are the “current”
coordinates from the prior call. Note that the ImageSprite won’t actually move
anywhere in response to the Dragged event unless MoveTo is specifically called.
EdgeReached(edge)
Event handler called when the ImageSprite reaches an edge of the screen.
If Bounce is then called with that edge, the sprite will appear to bounce off of the
edge it reached. Edge here is represented as an integer that indicates one of eight
directions north(1), northeast(2), east(3), southeast(4), south (-1), southwest(-2),
west(-3), and northwest(-4).
Flung(x,y,speed,heading,xvel,yvel)
When a fling gesture (quick swipe) is made on the sprite: provides the (x,y) position
of the start of the fling, relative to the upper left of the canvas. Also provides the
speed (pixels per millisecond) and heading (0-360 degrees) of the fling, as well as the
x velocity and y velocity components of the fling’s vector.
NoLongerCollidingWith(other)
Event indicating that a pair of sprites are no longer colliding.
TouchDown(x,y)
When the user begins touching the sprite (places finger on sprite and leaves it there):
provides the (x,y) position of the touch, relative to the upper left of the canvas
TouchUp(x,y)
When the user stops touching the sprite (lifts finger after a TouchDown event):
provides the (x,y) position of the touch, relative to the upper left of the canvas.
Touched(x,y)
When the user touches the sprite and then immediately lifts finger:
provides the (x,y) position of the touch, relative to the upper left of the canvas.
Methods
Bounce(edge)

Makes this ImageSprite bounce, as if off a wall. For normal bouncing,


the edge argument should be the one returned by EdgeReached.
CollidingWith(other)
Indicates whether a collision has been registered between this ImageSprite and the
passed other sprite.
MoveIntoBounds()
Moves the sprite back in bounds if part of it extends out of bounds, having no effect
otherwise. If the sprite is too wide to fit on the canvas, this aligns the left side of the
sprite with the left side of the canvas. If the sprite is too tall to fit on the canvas, this
aligns the top side of the sprite with the top side of the canvas.
MoveTo(x,y)
Moves the ImageSprite so that its left top corner is at the
specified x and y coordinates.
MoveToPoint(coordinates)
Moves the ImageSprite so that its origin is at the specified x and y coordinates.
PointInDirection(x,y)
Turns this ImageSprite to point toward the point with the coordinates (x, y).
PointTowards(target)
Turns this ImageSprite to point towards a given target sprite. The new heading will
be parallel to the line joining the centerpoints of the two sprites.

Files :
Non-visible component for storing and retrieving files. Use this component to write
or read files on the device.
The exact location where external files are placed is a function of the value of
the Scope property, whether the app is running in the Companion or compiled, and
which version of Android the app is running on.
Because newer versions of Android require files be stored in app-specific directories,
the DefaultScope is set to App. If you are using an older version of Android and need
access to the legacy public storage, change the DefaultScope property to Legacy. You
can also change the Scope using the blocks.
Below we briefly describe each scope type:

• App: Files will be read from and written to app-specific storage on Android 2.2 and
higher. On earlier versions of Android, files will be written to legacy storage.
• Asset: Files will be read from the app assets. It is an error to attempt to write to app
assets as they are contained in read-only storage.
• Cache: Files will be read from and written to the app’s cache directory. Cache is
useful for temporary files that can be recreated as it allows the user to clear
temporary files to get back storage space.
• Legacy: Files will be read from and written to the file system using the App Inventor
rules prior to release nb187. That is, file names starting with a single / will be read
from and written to the root of the external storage directory, e.g., /sdcard/. Legacy
functionality will not work on Android 11 or later.
• Private: Files will be read from and written to the app’s private directory. Use this
scope to store information that shouldn’t be visible to other applications, such as file
management apps.
• Shared: Files will be read from and written to the device’s shared media directories,
such as Pictures.

Note 1: In Legacy mode, file names can take one of three forms:
• Private files have no leading / and are written to app private storage (e.g., “file.txt”)
• External files have a single leading / and are written to public storage (e.g., “/file.txt”)
• Bundled app assets have two leading // and can only be read (e.g., “//file.txt”)

Note 2: In all scopes, a file name beginning with two slashes (//) will be interpreted
as an asset name.
Properties
DefaultScope

Specifies the default scope for files accessed using the File component. The App
scope should work for most apps. Legacy mode can be used for apps that predate
the newer constraints in Android on app file access.
ReadPermission
A designer-only property that can be used to enable read access to file storage
outside of the app-specific directories.
Scope
Indicates the current scope for operations such as ReadFrom and SaveFile.
WritePermission
A designer-only property that can be used to enable write access to file storage
outside of the app-specific directories.
Events
AfterFileSaved(fileName)

Event indicating that the contents of the file have been written.
GotText(text)
Event indicating that the contents from the file have been read.
Methods
AppendToFile(text,fileName)

Appends text to the end of a file. Creates the file if it does not already exist. See the
help text under SaveFile for information about where files are written. On success,
the AfterFileSaved event will run.

Game:
Development of the Snake game with App Inventor
A Snake game is a classic and popular game in which the player
controls a snake that moves around the screen and eats food,
causing the snake to grow longer. The goal of the game is to make
the snake as long as possible without running into the walls or its own
tail.
It is possible to create a Snake game using App Inventor. App
Inventor is a visual, blocks-based programming environment that
allows you to create Android apps without writing any code. Here’s a
general outline of the steps you would need to take to create a Snake
game with App Inventor:

• Create a new project in App Inventor and design the user interface for
the game. This can include a Canvas component to display the game
board, and buttons or other components to control the snake’s
movement.
• Use the blocks editor to create the logic for the game. You can use
blocks such as “when button is clicked” to control the snake’s
movement, and “if-then” statements to detect when the snake collides
with the walls or its own tail.
• Create a food item that appears randomly on the screen, and make the
snake grow when it eats the food.
• Use variables to keep track of the snake’s position, length, and direction
of movement.
• Add a scoring system that keeps track of the player’s score and
implements a game over condition.
• Test and refine the game by running it on an emulator or a physical
device.
App Inventor has different components that you can use to create
your game, such as the Timer, the Canvas, the ImageSprite, and lists; you can
also use the if-else and loop blocks to control the game logic. A rough sketch
of the core update loop follows.
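
App Inventor itself is blocks-based, but the update logic the outline above describes can be sketched in Python to show the idea. Everything below (the grid size, variable names, and the simple tick loop) is an illustrative assumption, not App Inventor code.

```python
import random

GRID_W, GRID_H = 20, 20                     # assumed board size in cells

snake = [(10, 10), (9, 10), (8, 10)]        # head first
direction = (1, 0)                          # currently moving right
food = (random.randrange(GRID_W), random.randrange(GRID_H))
score = 0

def tick():
    """One Clock.Timer-style update: move, check collisions, eat food."""
    global food, score
    head_x, head_y = snake[0]
    new_head = (head_x + direction[0], head_y + direction[1])

    # Game over if the snake hits a wall or its own tail.
    if (not 0 <= new_head[0] < GRID_W or not 0 <= new_head[1] < GRID_H
            or new_head in snake):
        return False

    snake.insert(0, new_head)
    if new_head == food:                    # grow and respawn the food
        score += 1
        food = (random.randrange(GRID_W), random.randrange(GRID_H))
    else:
        snake.pop()                         # no food: length stays the same
    return True

while tick():                               # run until the snake crashes
    pass
print("Game over, score:", score)
```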

Device Location :

The LocationSensor component is a simple control that is difficult to use without knowledge
of some basic concepts of geo-location. The LocationSensor is used to communicate with
the global positioning satellite receiver (GPS) in your phone/tablet. When
the LocationSensor communicates with the built-in GPS receiver, the GPS determines the
location of your device. The sensor can also work with network/wifi location services.
Finding a location through the network uses very different techniques than with a GPS.
Location means the device's present latitude and longitude, or it can mean your street
address. The measuring units employed in the LocationSensor for distance are meters. Time
is measured in milliseconds (ms). Be aware that one second = 1000 ms and 60,000 ms is
one minute.
When the sensor reports distance information or you set a distance into the component, the
units are in meters. If your app must deal in English units, use the Math blocks to convert
units at the time you display them. Calculate everything in meters, then convert to report the
result in feet or miles on your display. Think meters!
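
For example, the meters-to-feet or meters-to-miles conversion that the Math blocks would perform at display time looks like this (a Python sketch; the conversion factors are standard, and the function names are assumptions):

```python
METERS_PER_FOOT = 0.3048
METERS_PER_MILE = 1609.344

def meters_to_feet(m):
    return m / METERS_PER_FOOT

def meters_to_miles(m):
    return m / METERS_PER_MILE

# Keep every calculation in meters; convert only when displaying.
distance_m = 2500
print(f"{distance_m} m = {meters_to_feet(distance_m):.0f} ft "
      f"= {meters_to_miles(distance_m):.2f} mi")   # about 8202 ft = 1.55 mi
```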

Example: How do you display your current latitude, longitude, and address?

The LocationSensor.LocationChanged event is triggered

1) when the app first gets a reading from the GPS satellites or other
mechanisms,
2) when the phone or tablet’s location changes. If you are walking around,
it could trigger many times.

You can control how often LocationChanged is triggered with
the LocationSensor.TimeInterval and LocationSensor.DistanceInterval properties.
The TimeInterval is set to 60,000 ms, or 1 minute, by default. This
means that LocationChanged will only be triggered again after one minute.
In this sample, the location readings are displayed in labels. Latitude and
longitude are numbers, while the currentAddress property provides a street
address for your current location. As you walk around, you’ll see the
numbers and address change.
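
A minimal sketch of what the LocationChanged handler does with those readings, written in Python rather than blocks (the label names and the dictionary standing in for the Labels are assumptions):

```python
# Stand-ins for the three Labels used to display the readings.
labels = {"latitude": None, "longitude": None, "address": None}

def on_location_changed(latitude, longitude, current_address):
    """Mirrors 'when LocationSensor1.LocationChanged': copy each reading
    into the label that displays it."""
    labels["latitude"] = latitude
    labels["longitude"] = longitude
    labels["address"] = current_address
    print(labels)

# Example reading; a real sensor would call this at most once per
# TimeInterval (60,000 ms by default).
on_location_changed(42.3601, -71.0942, "77 Massachusetts Ave, Cambridge, MA")
```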

Web Browsing :
App Inventor has a component called WebViewer. We can load a webpage/website into an
app using WebViewer component. To do that, we can just drag the WebViewer component
to Screen1 window and set the HomeUrl property of the WebViewer to the webpage we
want to view when the app boots up.

Note that including a network protocol such as http or https is important, otherwise, the
webpage won’t load. If you try www.google.com, it won’t work, but http://www.google.com
will work. For our sample app, we will have a text box where a user can specify a URL and
press Go button to load the webpage. We will also have previous and next buttons to
navigate through pages.
This is how our app would look:

We can set the HomeUrl either on the viewer/designer window or on the blocks editor. If
you’ve taken a look at the blocks snapshot below, we have set the HomeUrl on the blocks
editor under Screen1.Initialize event. There, we also set the previous and next buttons
Enabled property to false. When a user taps on Go button, we first check if the URL in the
text box contains http:// or https://, if it does, we simply load the webpage; if it doesn’t, we
append http:// at the beginning of the specified URL and load. Since we were able to load a
webpage, we set the previous button’s Enabled property to true. Note that if the text box is
empty and the user presses Go, the viewer will try to load http://, which obviously is not valid.
You can always check if the text box is empty and, if it is, not load anything. We didn’t do that
here because we want to see the default behavior when a URL is invalid.
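
The Go-button check described above can be sketched like this in Python (not blocks). The check-and-prepend step is the logic the paragraph describes, and the empty-textbox guard is the optional extra it mentions; the function name url_to_load is an assumption.

```python
def url_to_load(textbox_text):
    """Return the URL the WebViewer should load when Go is pressed,
    or None if the text box is empty (the optional guard)."""
    url = textbox_text.strip()
    if not url:
        return None                      # empty box: load nothing
    if url.startswith("http://") or url.startswith("https://"):
        return url                       # protocol already present
    return "http://" + url               # prepend a default protocol

print(url_to_load("www.google.com"))                 # http://www.google.com
print(url_to_load("https://appinventor.mit.edu/"))   # unchanged
print(url_to_load(""))                               # None
```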

When the previous button is pressed, we enable the next button since there’s a page we can
go forward to. We also check whether there are any other pages we can go back to by
using the CanGoBack method of the WebViewer component. We do the opposite when the next
button is pressed.
Download the source of this app CoolWebViewer

Hardware Back Button Support


So far we have done OK. But we can only go back by pressing the Previous button, not by using
the hardware back button. If we press the device’s back button, the app closes. Let’s modify the
app. We are going to add another screen. Don’t worry, we don’t strictly need to; I am doing it
because someone asked in the comment section below. The first screen will only have a button
that opens Screen2 when pressed, and Screen2 will have the components we had in the app
above. Let’s see what we mean:
Going to Screen2 is easy. On Screen1’s Button1.Click event we do:

As you can see, we only opened Screen2. The “open another screen” block can be found under
the Control drawer in the Blocks window. We have to modify what we did in our first app above to
achieve the functionality we want. Let’s take a look at the complete blocks view on Screen2.

It’s very similar to what we did in our first app above. The changes are: we created a procedure
named GoBack so that both PrevButton.Click and Screen2.BackPressed can use it. The
only thing you need to pay attention to is the Screen2.BackPressed event. We first check if
there is a page to go back to; if there is, we call the GoBack procedure. If there is none, we close
the current screen. Hence we go back to the previous screen, as we didn’t close Screen1.
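
A rough Python sketch of that Screen2 back-navigation logic. The procedure and event names mirror the description above, while the WebViewerStub class and its methods are assumptions standing in for the WebViewer's history, not the real component API.

```python
class WebViewerStub:
    """Minimal stand-in for the WebViewer's back history."""
    def __init__(self):
        self.history, self.index = ["home"], 0
    def can_go_back(self):
        return self.index > 0
    def go_back(self):
        self.index -= 1

web_viewer = WebViewerStub()
screen_open = True            # whether Screen2 is still open

def go_back():
    # Shared procedure used by both PrevButton.Click and BackPressed.
    web_viewer.go_back()

def on_back_pressed():
    global screen_open
    if web_viewer.can_go_back():
        go_back()             # step back within the WebViewer
    else:
        screen_open = False   # close Screen2, revealing Screen1

on_back_pressed()
print(screen_open)            # False: no history yet, so the screen closes
```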
