
ValueLabs - WHS Bi-Weekly Summary

Action Items
1. As per Jeff's suggestion, we are working on the following areas. — In Progress
   • On-time Delivery (taking initiatives): tracking actual vs. planned details to identify delays, blockers, etc.
   • Quality: summary of last-sprint items (the current sprint is still in progress); out of 26 items (VSE + Coaching dashboard):
     o Completed – 11
     o In Progress – 10
     o Open – 2
     o Code Review – 1
     o Testing – 2

2. FDB October month data load — On hold
   The FDB data load for October on the upper environment is still not complete. Per Jeff's email, Ryan will complete it partially and train the other DBAs. We have also started work on obtaining a copy of the Med Knowledge manual for FDB, as requested today.

3. FDB December month data load — In Progress
   We will execute only through step 4, then resume the remaining steps after the Oct '19 deploy is complete.

4. OKRs — In Progress
   The team has started capturing them as data becomes available.

5. Possibility of configuring the JIRA report to fetch OKR metrics — Open
   Jeff is still exploring the possibilities.

6. Code coverage — Open
   Jenkins code coverage is currently broken. Jeff has shared one more plugin; this needs further discussion.

7. On-time delivery metrics sheet format change
   Filling in the sub-task details (date/status) in the Google spreadsheet proposed by ValueLabs is a time-consuming process. We would like to simplify it, subject to John's/Jeff's agreement.
Invitational Challenges + Health Coach + DHA

Ups

Ultraviolet:
• The team has completed development of the main health coach stories; most of the stories are now in the testing phase.
• Automation: in Nov '19 Sprint 3, the QA team completed VSE-4942. In the current sprint, VSE-5221 is in code review, covering the end-to-end flow of existing tests.
• VSE-2121: we are in good shape; we fixed all the issues reported earlier after the AjaxControlToolkit DLL upgrade done by the other team, and testing of the other modules is in progress.
• We received a response on the FDB data load today and will start work on the December month upload.

Thor:

Downs

Ultraviolet:
• VSE-2484: refactored the code based on code review comments (e.g., removing the PHR client, as it is used in legacy code). We faced technical challenges writing unit tests for the file upload, and the PR review was also delayed due to a holiday.
• VSE-5942: the onsite PR review had been pending for a long time, which delayed the automation story.
• VSE-2943: waited for the UI mockup; also did some R&D on supporting a JsonPatch library in the StepBattle service, then implemented JsonPatch from scratch and updated the existing endpoint.
• VSE-5370: we were unable to find the location to insert the new weblet; after a discussion with Julie, we got the required information.
• Teamwork credibility: it has been observed that Keerthy is putting her name on JIRA tickets that are already assigned to QA team members.
  o VSE-5221: Sutapa provided her inputs, as she created the story; Keerthy was going to discuss it with Jeff but later assigned it to herself.
  o VSE-3525: Sai had completed almost all of the testing, but at the last moment Keerthy started working on it.
  o VSE-4014: Deepthi was assigned as the QA owner, but when the story moved into testing, Keerthy put her name on it without discussing with anyone; Deepthi had already written test cases for it.

Thor:
Sprint Metrics for Last 5 Sprints – Productivity & Defects (IC + HC)
[Chart: Productivity by story points (Done + Demoable) per sprint, Oct 2019 Sprint 1 through Nov 2019 Sprint 3]

[Chart: # of user stories and defects, planned vs. delivered vs. spilled over per sprint, Oct 2019 Sprint 1 through Nov 2019 Sprint 3]
[Chart: # of defects by sprint, Oct 2019 Sprint 1 through Nov 2019 Sprint 3]

Inference:
• The team delivered a good number of story points in November 2019 Sprint 3.
  o Completed most of the health coach stories planned for this release.
• QA team members logged 6 issues.
  o Of the 6, VSE-5526 is not an issue; the new scheduler correctly mimics the aspx dialer.
  o Two issues, VSE-5563 and VSE-5576, have been pulled into the current sprint.
  o The remaining 3 UI issues, VSE-5561 (UI trackers), VSE-5564, and VSE-5565 (mobile view), have moved to the backlog.
Sprint Metrics for Last 5 Sprints – Productivity & Defects (DHA)
[Chart: Productivity by story points per sprint, Aug 2019 Sprint 2 through Oct 2019 Sprint 2]

[Chart: # of user stories and defects, planned vs. delivered vs. spilled over per sprint, Aug 2019 Sprint 2 through Oct 2019 Sprint 2]
[Chart: # of defects by sprint, Aug 2019 Sprint 2 through Oct 2019 Sprint 2]

Inference:
• A few stories (VSR-1385, VSR-1421, VSR-1493, VSR-1816, VSR-1817) could not be delivered on time due to a time machine issue.
Sprint Metrics for Nov 2019 Sprint 3 – Break-up by Spill-over Reasons (IC + HC)

[Chart: spill-over count per story; reasons listed below]
• VSE-5005: Testing
• VSE-2871: Testing; dependent on another story
• VSE-3457: live-environment-related issue – Open
• VSE-5469: new feature will be done in Epic 2 – Open
• VSE-2484: got major review comments – In Progress
• VSE-2943: Invitational – In Progress
• VSE-4994, VSE-4874: Code Review (assigned to Tom, McCluskey)
• VSE-5221: automation story – In Progress
• VSE-5226: not a bug; Jeff made code changes in the current sprint
• VSE-2121: PHM (34 story points) – In Progress
• VSE-5370: research and POC for weblet – In Progress

Sprint Metrics for Nov 2019 Sprint 3 – Break-up by Spill-over Reasons (DHA)

[Chart: spill-over count by status – Testing, In Progress, Open, Code Review]
WHS - OKRs (Nov Sprint 1)

Objective: Improve the code efficiency

• Key Result: The Checklist for Behavior Change Team Developers (found on Confluence) is used for all user stories. All sections marked "No" (not complied with) for a user story have a Jira comment from a Development Lead accepting the checklist deviation.
  Comment: Will start following it from the next sprint.

• Key Result: Achieve unit test coverage greater than 90% on newly written code, unless a specific user-story exemption is granted by a Portland team lead.
  Criteria: >=90% Green; 70–90% Amber; <70% Red.
  Status: Can't measure now.
  Comment: We do not measure this accurately, as we cannot differentiate between new and old code in Jenkins; at present, code coverage is broken in Jenkins.

• Key Result: Reduce iterations to fewer than 3 (maximum 4) per story, from entering the In-Progress state to the Demo state, ensuring efficient cycle times.
  Criteria: 1–2 Green; 3–4 Amber; >4 Red.
  Status: Green. Nov Sprint 1: 1.08 (13/12).
  Comment: The number might not be accurate, as we do not count the first time a PR is raised.

• Key Result: Maintain the average number of post-release QA defects at 1 per completed user story. (These are QA defects found after the story has been demoed, accepted by the product manager, and deployed to Staging and/or Live.)
  Criteria: <1 defect per completed story Green; 1–2 Amber; >2 Red.
  Comment: We would need feature-release information to track this. How should we measure it, given that we do not pull a story back into the In-Progress state in the current sprint if an issue is reported by the support team?

• Key Result: Maintain the average number of pre-release QA defects at 2 per completed user story. (These are QA defects found before the story moves to Done; the unit of measurement is the story moving backwards from the Demo state, or the creation of a new Jira bug story.)
  Criteria: <2 defects per completed story Green; 2–4 Amber; >5 Red.
  Status: Green. Nov Sprint 1: 0.5 (6/12).
  Comment: 10/31: Do we need to consider defects that are linked to stories? What about pre-existing bugs that are reported during the testing of a story?

• Key Result: Achieve user-story demo acceptance by the team's Product Manager and Portland development team members, ensuring that the intent of the user stories is realized.
  Criteria: >95% Green; 70–95% Amber; <70% Red. (First-time demo acceptance rate during the sprint demo, i.e., the story is accepted the first time it is demoed rather than being moved back to In-Progress or Testing. Rate = # of stories accepted / total stories demoed.)
  Comment: I believe we need to report the user-story percentage based on the demos the Ultraviolet QA team gives during their sign-off meeting, which Julie used to accept; if agreed, we will start capturing it for the current sprint.
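The iteration and pre-release defect figures above are simple ratios; the following is an illustrative sketch reproducing the Nov Sprint 1 numbers from the slide (the helper function names are ours, not from any tracking tool):

```python
# Illustrative recomputation of the Nov Sprint 1 OKR figures above.
# Counts are taken from the slide; function names are hypothetical.

def avg_iterations(total_iterations: int, stories: int) -> float:
    """Average In-Progress -> Demo iterations per story (1-2 is Green)."""
    return round(total_iterations / stories, 2)

def defects_per_story(defects: int, stories: int) -> float:
    """Pre-release QA defects per completed story (< 2 is Green)."""
    return round(defects / stories, 2)

# 13 iterations across 12 stories -> 1.08, within the 1-2 Green band
print(avg_iterations(13, 12))    # 1.08
# 6 pre-release defects across 12 completed stories -> 0.5, Green
print(defects_per_story(6, 12))  # 0.5
```

Note the caveat from the slide still applies: the first PR raise is not counted, so the true iteration average may be slightly higher.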
WHS – OKRs, continued (October Sprint 2)

Objective: Deliver on commitments

• Key Result: Deliver at least 70% of the user stories that are in the sprint on the sprint's start date AND are marked with a priority of major, critical, or blocker.
  Criteria: >70% of user stories Green; 50–70% Amber; <50% Red.
  Comment: Our contribution is negligible (<40–50%) for blocker/critical issues, since most of the time the issues are reproducible only on the live environment, to which the team does not have access.

• Key Result: Achieve a minimum productivity of 0.20 story points per workday actually worked, averaged over a rolling two sprints.
  Criteria: >0.20 points per workday Green; 0.10–0.21 Amber; <0.10 Red.
  Status: Green. October Sprint 2: 0.70 ((63/9)/10) [9 developers, 10 working days].

Objective: Quality

• Key Result: Achieve 90% test automation coverage for user stories that relate to new product capabilities, measured two sprints after the story is moved to the Done state. (Note: this excludes user stories for product capabilities that are currently in Production/on the Live environment.)
  Criteria: >90% Green; 60–90% Amber; <60% Red.
  Comment: Need more clarification.

• Key Result: Achieve 95% of defects found through manual and automation testing, measured on a rolling four-sprint basis: (# found through manual + automation + other Behavior Change team members) / (# found through manual + automation + other Technology Department teams + clients).
  Criteria: >95% Green; 90–95% Amber; <90% Red.
  Status: Green. October Sprint 2: 100% (6/6).
  Comment: Automation started just 2 months ago; progress is a bit slow, as we will only be doing API endpoint testing using Pester.

• Key Result: Achieve zero incidents per sprint as a result of user story and bug-fix releases; no product or product capability that was previously functional goes down on Production. (Note: there have been zero incidents over the past six months as a result of the work Ultraviolet has completed.)
  Criteria: zero incidents Green; 1 incident Amber; 2 or more incidents Red.
  Status: Green. October Sprint 2: 0.
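The productivity and defect-detection figures in the October Sprint 2 column follow the formulas stated in the key results above; this is a minimal sketch reproducing them (numbers from the slide, function names ours):

```python
# Illustrative recomputation of the October Sprint 2 figures above.
# Inputs come from the slide; the helper names are hypothetical.

def productivity_per_workday(story_points: int, developers: int, workdays: int) -> float:
    """Story points per developer per workday (> 0.20 is Green)."""
    return round(story_points / developers / workdays, 2)

def detection_rate(found_by_team: int, found_by_all: int) -> float:
    """Percent of defects found by the team's own manual + automation testing."""
    return round(100 * found_by_team / found_by_all, 1)

# 63 points, 9 developers, 10 working days -> (63/9)/10 = 0.70 (Green)
print(productivity_per_workday(63, 9, 10))  # 0.7
# 6 of 6 defects found by the team's own testing -> 100% (Green)
print(detection_rate(6, 6))                 # 100.0
```

The detection-rate denominator in the key result also counts defects found by other Technology Department teams and clients; in October Sprint 2 those contributed none, hence 6/6.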
WHS (Thor) - OKRs (October Sprint 2)

Objective: Improve the code efficiency

• Key Result: Reduce iterations to a maximum of 2 per pull request (Dev or QA).
  Criteria: average <=1 Green; 1–1.5 Amber; >1.5 Red.
  Status: Green. October Sprint 2: 0.0.
  Comment: The number might not be accurate, as we do not count the first time a PR is raised.

• Key Result: Increase unit test coverage for the existing codebase to 70% (needs support from WebMD).
  Criteria: >=70% Green; 60–70% Amber; <60% Red.
  Status: Can't measure now.
  Comment: We do not have build history in Jenkins; working on identifying a mechanism.

• Key Result: Increase unit test coverage for new code to 90% (needs support from WebMD).
  Criteria: >=90% Green; 70–90% Amber; <70% Red.
  Status: Can't measure now.
  Comment: We do not have build history in Jenkins; working on identifying a mechanism.

• Key Result: Maintain the average number of QA defects at 2 per completed user story.
  Criteria: average <2 Green; 2–3 Amber; >3 Red.
  Status: Green. October Sprint 2: 0.27.

Objective: Deliver on commitments

• Key Result: Deliver at least 65% of sprint commitments.
  Criteria: >=65% Green; 50–65% Amber; <50% Red.
  Status: Red. October Sprint 2: 50.90%.
  Comment: Due to scope changes, requirements were sometimes finalized late.

• Key Result: Achieve a minimum productivity of 3 points per developer in each sprint (2 weeks).
  Criteria: >=3 Green; 2–3 Amber; <2 Red.
  Status: Green. October Sprint 2: 11.2 (developers only).

Objective: Quality

• Key Result: Achieve 90% automation coverage for sprint stories.
  Criteria: >=90% Green; 80–90% Amber; <80% Red.
  Status: N/A.

• Key Result: Achieve 90% of defects found through manual/automation testing.
  Criteria: >=90% Green; 70–90% Amber; <70% Red.
  Status: Can't measure now.
  Comment: Automation started just 2 months ago; progress is a bit slow, as we will only be doing API endpoint testing using Pester.

• Key Result: 100% process compliance.
  Criteria: no process deviation incidents Green; 1–3 incidents per month Amber; >3 incidents per month Red.
  Status: Green.
Thank You!

Doing the right thing.


Always.

www.valuelabs.com | Hyderabad: +91 40 6623 9000
