
6.825 – Project 2

11/4/2004

After executing our variable elimination procedure, we obtained the following results for

each of the queries below.

For the sake of easy analysis of the PropCost probability distributions obtained

throughout this project from the insurance network, we define the function f to be a

weighted average across the discrete domain, resulting in a single scalar value

representative of the overall cost. More specifically,
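assuming each domain element carries its nominal dollar cost (Thousand = 10^3, TenThou = 10^4, HundredThou = 10^5, Million = 10^6; a reconstruction consistent with the values reported below):

```latex
f = \sum_{v \in \mathrm{dom}(\mathrm{PropCost})} P(\mathrm{PropCost} = v) \cdot \mathrm{cost}(v)
```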

<[Burglary] = [true]> = 0.284171835364393

<[Earthquake] = [true]> = 0.17606683840507917

<[PropCost] = [Million]> = 0.02709352198178344

<[PropCost] = [TenThou]> = 0.3427002442093675

<[PropCost] = [Thousand]> = 0.45722754191243536

(f = 48275.62)
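As a quick sanity check (a reconstruction on our part: the HundredThou probability is not listed above, so we recover it as the residual mass):

```python
# Recompute the scalar summary f as a weighted average over PropCost's
# discrete domain, using the nominal dollar cost of each element. The
# HundredThou probability is inferred as the residual mass (an assumption).
costs = {"Thousand": 1e3, "TenThou": 1e4, "HundredThou": 1e5, "Million": 1e6}
dist = {
    "Million": 0.02709352198178344,
    "TenThou": 0.3427002442093675,
    "Thousand": 0.45722754191243536,
}
dist["HundredThou"] = 1.0 - sum(dist.values())
f = sum(p * costs[v] for v, p in dist.items())
print(round(f, 2))  # 48275.62, matching the reported value
```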

These results are consistent with those obtained by executing the given enumeration

procedure, and those given in Table 1 of the project hand-out.
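As a sketch of what the procedure does under the hood (a toy reconstruction, not our actual implementation), the two factor operations at the heart of variable elimination can be written as:

```python
from itertools import product

# Toy sketch of the two factor operations at the heart of variable
# elimination: pointwise product and summing out a variable. A factor is a
# (vars, table) pair; table keys are value tuples ordered like vars.
# All variables here are boolean.
def multiply(f, g):
    fvars, ftab = f
    gvars, gtab = g
    allvars = list(fvars) + [v for v in gvars if v not in fvars]
    table = {}
    for vals in product([False, True], repeat=len(allvars)):
        asg = dict(zip(allvars, vals))
        fkey = tuple(asg[v] for v in fvars)
        gkey = tuple(asg[v] for v in gvars)
        table[vals] = ftab[fkey] * gtab[gkey]
    return (allvars, table)

def sum_out(var, f):
    fvars, ftab = f
    keep = [v for v in fvars if v != var]
    table = {}
    for key, p in ftab.items():
        asg = dict(zip(fvars, key))
        newkey = tuple(asg[v] for v in keep)
        table[newkey] = table.get(newkey, 0.0) + p
    return (keep, table)

# Two-node chain B -> A with illustrative numbers: eliminate B to get P(A).
pB = (["B"], {(False,): 0.7, (True,): 0.3})
pA_given_B = (["B", "A"], {(False, False): 0.9, (False, True): 0.1,
                           (True, False): 0.4, (True, True): 0.6})
pA = sum_out("B", multiply(pB, pA_given_B))
# P(A = True) = 0.7*0.1 + 0.3*0.6 = 0.25
```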

A. Insurance Network Queries

Liu, Smith 2

P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, MakeModel = SportsCar)

Based on the network as illustrated in Figure 1 of the handout, we expect that the driver would be less risk averse, would have more money, and would drive a more valuable car. All of these things should cause the cost of insurance to "go up" relative to our previous query, which did not involve any evidence about the MakeModel of the car. An increase in the PropCost sense means that the probability distribution should be shifted towards the higher-cost elements of the domain (e.g. Million might have a higher probability than Thousand).

The results bear this out: f is a little less than four thousand dollars greater in this case relative to that from Section 1.3.

<[PropCost] = [Million]> = 0.03093877334365239

<[PropCost] = [TenThou]> = 0.34593039737969233

<[PropCost] = [Thousand]> = 0.45133749255661565

(f = 52028.74)

P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, GoodStudent = True)

We expect that if the adolescent driver is a GoodStudent, then the overall cost of insurance goes up. This follows from the network as shown in Figure 1 of the project handout: GoodStudent is connected to the network only through its two parents, Age and SocioEcon. Since Age is an evidence variable, SocioEcon is the only node affected by adding GoodStudent to the evidence. More specifically, if the adolescent driver is a good student, they are likely to have more money, and thus drive fancier cars, be less risk averse, et cetera.

The results bear this out once we condition on the proper evidence. More specifically, f is a little less than four thousand dollars greater in this case relative to that from Section 1.3.

<[PropCost] = [Million]> = 0.029748793596801583

<[PropCost] = [TenThou]> = 0.32771416728772235

<[PropCost] = [Thousand]> = 0.4587902473538701

(f = 51859.40)


<[N112] = [1]> = 0.01195999957730707

<[N143] = [1]> = 0.10000000303882783

A. Histograms

[Figure 1: Random Elimination Ordering, Problem 1. Running time (seconds) across ten trials for P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, MakeModel = SportsCar).]


[Figure 2: Random Elimination Ordering, Problem 2. Running time (seconds) across ten trials for P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, GoodStudent = True).]

[Figure 3: Random Elimination Ordering, Problem 3. Running time (seconds) across ten trials for P(N112 | N64 = "3", N113 = "1", N116 = "0").]


[Figure 4: Random Elimination Ordering, Problem 4. Running time (seconds) across ten trials for P(N143 | N146 = "1", N116 = "0", N121 = "1").]

B. Discussion

Figure 1 through Figure 4 illustrate the running time of a random-order variable elimination algorithm for each of the problems in Task 2 of the project handout. We ran the algorithm ten times for each problem. If a bar is stacked with a purple bar on top of it, then the heap ran out of memory during that execution. In this case, we know that the execution would have taken at least the amount of time illustrated by the blue bar, the time it ran before exhausting memory. We suppose that each execution where the computer ran out of memory would have taken at least 5000 seconds to complete.

It is worth noting that the time taken on the successful runs (those without a purple bar) is much lower than the time the unsuccessful runs took before they crashed; i.e., the successful blue bars tend to be shorter than the unsuccessful blue bars. This indicates that random ordering tends to get it either very right or very wrong.


A. Histograms

[Figure 5: Average execution time (seconds) of greedy-ordering variable elimination for each of the four problems.]

Problem         Average time (s)
Insurance – 1        0.629
Insurance – 2        1.086
Carpo – 1            0.088
Carpo – 2            0.087

Table 1. Average time of execution for variable elimination for the problems from Task 2. Averages are constructed across ten independent runs each, which are illustrated in Figure 5.

B. Discussion

As can be seen from Table 1, the time needed for variable elimination is much smaller with a greedy elimination ordering than with a random ordering. This makes sense, because a random ordering can happen to eliminate a parent of many children, creating a huge factor which slows down the algorithm and eats up memory. Greedy-ordering variable elimination, by contrast, works very well: even against the runs from Section 3 that did not run out of memory, the greedy algorithm tends to be about 100-200 times faster.
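The greedy idea can be sketched as follows (min-neighbors on the interaction graph; an illustration only, as the project's actual greedy criterion may differ):

```python
# Minimal sketch of a greedy elimination-ordering heuristic: repeatedly
# eliminate the variable whose elimination creates the smallest factor.
# Graph: variable -> set of neighboring variables.
def greedy_order(graph):
    g = {v: set(nbrs) for v, nbrs in graph.items()}
    order = []
    while g:
        v = min(g, key=lambda x: len(g[x]))   # smallest resulting factor
        nbrs = g.pop(v)
        # eliminating v connects all of its neighbors to each other
        for a in nbrs:
            g[a].discard(v)
            g[a].update(nbrs - {a})
        order.append(v)
    return order

# On a chain A - B - C, an endpoint goes first, so the larger factor
# created by eliminating the middle variable early is avoided.
order = greedy_order({"A": {"B"}, "B": {"A", "C"}, "C": {"B"}})
```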


Functionality

Each of our results below looks like it is in the right neighborhood. We give more explicit quality results in the problems that follow this one.

<[Burglary] = [true]> = 0.4551612029260302

<[Earthquake] = [true]> = 2.8417163967036946E-4

<[PropCost] = [Million]> = 0.021563876240368398

<[PropCost] = [TenThou]> = 0.35877461270610517

<[PropCost] = [Thousand]> = 0.44861060067149516

P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, MakeModel = SportsCar)

<[PropCost] = [Million]> = 0.030620517617711222

<[PropCost] = [TenThou]> = 0.35048331774243846

<[PropCost] = [Thousand]> = 0.4555035859058312

P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, GoodStudent = True)

<[PropCost] = [Million]> = 0.032866049889275516

<[PropCost] = [TenThou]> = 0.30414914618811645

<[PropCost] = [Thousand]> = 0.46121321229624807

<[N112] = [1]> = 0.00898716978823346


<[N143] = [1]> = 0.08275054367376986

<[Burglary] = [true]> = 0.29

<[Earthquake] = [true]> = 0.158

<[PropCost] = [Million]> = 0.01

<[PropCost] = [TenThou]> = 0.355

<[PropCost] = [Thousand]> = 0.5750000000000001

P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, MakeModel = SportsCar)

<[PropCost] = [Million]> = 0.011

<[PropCost] = [TenThou]> = 0.34

<[PropCost] = [Thousand]> = 0.559

P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, GoodStudent = True)

<[PropCost] = [Million]> = 0.038

<[PropCost] = [TenThou]> = 0.372

<[PropCost] = [Thousand]> = 0.377

<[N112] = [1]> = 0.03


<[N143] = [1]> = 0.078

A. Results

[Figure: KL divergence vs. size of prefix thrown away (0 to 1000), ten runs. Each run used 2000 samples and threw away the first x samples, the independent variable expressed on the x-axis.]

[Figure: Average KL divergence vs. size of prefix thrown away (0 to 1000).]


B. Discussion

In this analysis, we ran the Gibbs sampler with 2000 samples on the same problem (Carpo – 1). For each run, we threw away a variable number of the initial samples. The idea is that since Gibbs sampling is a Markov chain algorithm, each sample depends strongly on the samples before it. Since we choose a random initialization vector for the variables, it can take some "burn in" time before the algorithm begins to settle into the right global solution.
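The burn-in idea can be sketched on a toy Markov chain (illustrative only; the chain below is a stand-in, not the project's Gibbs sampler):

```python
import random

# Toy illustration of burn-in: early samples reflect the arbitrary
# initialization, so we discard a prefix before averaging.
def chain_samples(n, seed=0):
    rng = random.Random(seed)
    x = 10.0                           # deliberately bad initialization
    out = []
    for _ in range(n):
        x = 0.5 * x + rng.gauss(0, 1)  # AR(1) chain whose stationary mean is 0
        out.append(x)
    return out

samples = chain_samples(2000)
burn = 600
estimate = sum(samples[burn:]) / len(samples[burn:])  # close to 0 after burn-in
```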

The results have a fairly nice characteristic curve, as can be seen in the average graph, with the only exception coming when we threw away the first 600 samples. Looking at each run, however, at x = 600 there was a single outlier with an extremely high KL divergence; given the many runs we did, we can ignore it. It seems that the ideal "burn in" time, a trade-off between good initialization and diversity of counted samples, is 800 samples.

A. Results

We present results indexed first by the algorithm (Likelihood Weighting, then Gibbs Sampling) and then by the problem. Within each problem we display two graphs: the first showing the results from ten iterations, and the second showing the KL divergence averaged across those iterations.
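The KL divergence scoring used throughout can be sketched as follows (the numbers are rounded from the results above; the HundredThou entries are the residual probability mass not listed in the report, an assumption on our part):

```python
import math

# KL divergence of a sampled distribution q from the exact distribution p,
# both dicts over the same discrete domain.
def kl_divergence(p, q):
    return sum(p[v] * math.log(p[v] / q[v]) for v in p if p[v] > 0)

exact   = {"Thousand": 0.457, "TenThou": 0.343, "HundredThou": 0.173, "Million": 0.027}
sampled = {"Thousand": 0.448, "TenThou": 0.359, "HundredThou": 0.171, "Million": 0.022}
kl = kl_divergence(exact, sampled)   # small and positive
```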


1. Likelihood Weighting

[Figure 6: Likelihood Weighting, Problem Insurance1. KL divergence vs. number of samples (100 to 2000), ten runs, for P(PropCost | Age = Adolescent, Antilock=False, Mileage = FiftyThou, MakeModel = SportsCar).]

[Figure 7: Problem Insurance1. Average KL divergence resulting from Likelihood Weighting applied to P(PropCost | Age = Adolescent, Antilock=False, Mileage = FiftyThou, MakeModel = SportsCar) for sample sizes between 100 and 2000.]


[Figure 8: Likelihood Weighting, Problem Insurance2. KL divergence vs. sample size (100 to 2000), ten runs, for P(PropCost | Age = Adolescent, Antilock=False, Mileage = FiftyThou, GoodStudent = True).]

[Figure 9: Problem Insurance2. Average KL divergence resulting from Likelihood Weighting applied to P(PropCost | Age = Adolescent, Antilock=False, Mileage = FiftyThou, GoodStudent = True) for sample sizes between 100 and 2000.]


[Figure 10: Likelihood Weighting, Problem Carpo1. KL divergence vs. number of samples (100 to 2000), ten runs, for P(N112 | N64 = "3", N113 = "1", N116 = "0").]

[Figure 11: Problem Carpo1. Average KL divergence resulting from Likelihood Weighting applied to P(N112 | N64 = "3", N113 = "1", N116 = "0") for sample sizes between 100 and 2000.]


[Figure 12: Likelihood Weighting, Problem Carpo2. KL divergence vs. number of samples (100 to 2000), ten runs, for P(N143 | N146 = "1", N116 = "0", N121 = "1").]

[Figure 13: Problem Carpo2. Average KL divergence resulting from Likelihood Weighting applied to P(N143 | N146 = "1", N116 = "0", N121 = "1") for sample sizes between 100 and 2000.]


2. Gibbs Sampling

[Figure 14: Divergences resulting from Gibbs Sampling applied to P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, MakeModel = SportsCar) for sample sizes between 1000 and 25000.]


[Figure 15: Average divergence resulting from Gibbs Sampling applied to P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, MakeModel = SportsCar) for sample sizes between 1000 and 25000.]

[Figure 16: Divergences resulting from Gibbs Sampling applied to P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, GoodStudent = True) for sample sizes between 1000 and 25000.]


[Figure 17: Average divergence resulting from Gibbs Sampling applied to P(PropCost | Age = Adolescent, Antilock = False, Mileage = FiftyThou, GoodStudent = True) for sample sizes between 1000 and 25000.]

[Figure 18: Divergences resulting from Gibbs Sampling applied to P(N112 | N64 = "3", N113 = "1", N116 = "0") for sample sizes between 1000 and 25000.]


[Figure 19: Average divergence resulting from Gibbs Sampling applied to P(N112 | N64 = "3", N113 = "1", N116 = "0") for sample sizes between 1000 and 25000.]

[Figure 20: Divergences resulting from Gibbs Sampling applied to P(N143 | N146 = "1", N116 = "0", N121 = "1") for sample sizes between 1000 and 25000.]


[Figure 21: Average divergence resulting from Gibbs Sampling applied to P(N143 | N146 = "1", N116 = "0", N121 = "1") for sample sizes between 1000 and 25000.]

B. Discussion of Results

Four interesting things:

As seen from the figures in Section 7.A.1, likelihood weighting tends to converge after about 500 samples, and in our problems and analyses it had always converged by 1000.

We expected Gibbs sampling to converge in about the same time, if not better. It turns out that Gibbs takes much longer: it typically converges by 5000 samples, a full order of magnitude higher, as can be seen from the figures in Section 7.A.2. This is likely a consequence of the Markov chain approach: since each sample depends on the ones before it, it can take many iterations before the algorithm settles into the global optimum, whereas likelihood weighting by definition discovers the appropriate probabilities (i.e. weights).
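The weighting idea can be sketched on a tiny illustrative network (Rain -> WetGrass, with WetGrass = True observed); the CPT numbers are made up and not from the project's networks:

```python
import random

# Sketch of likelihood weighting: evidence variables are never sampled;
# instead each sample is weighted by the likelihood of the evidence.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

def likelihood_weighting(n, seed=0):
    rng = random.Random(seed)
    w_rain = w_total = 0.0
    for _ in range(n):
        rain = rng.random() < P_RAIN        # sample the non-evidence variable
        w = P_WET_GIVEN_RAIN[rain]          # weight by P(evidence | sample)
        w_total += w
        if rain:
            w_rain += w
    return w_rain / w_total

est = likelihood_weighting(20000)           # near 0.18 / 0.26, about 0.692
```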

The convergence of Likelihood Weighting in Problem 3, as illustrated in Figure 10 and Figure 11, exhibits very interesting properties. In the other problems, likelihood weighting runs tended to exhibit relatively low variance in time to convergence. Here, however, some runs converged very quickly while others took abnormally long. This behavior is specific to this problem, and thus is likely induced by some characteristic of the problem; one likely explanation is that our query variable is a leaf node in a very poly-tree-like network.

3. Convergence is logarithmic

This is an evident feature of all of the graphs, but it has enormous implications for the choice of algorithm. The sampling methods approach, but never quite reach, the right answer; for the methods we surveyed, it would unfortunately take infinite time to arrive at the exact answer. Variable elimination, however, always arrives at the exact answer. Thus, if a user needs completeness (i.e. the right answer), they should probably use variable elimination. Even if a user only wants to be x% right, they still cannot rely on sampling methods; this gives rise to the "x% correct y% of the time" metric. We certainly see this in our graphs.

This is a very interesting point: in both problems 3 and 4 from Task 2 under Gibbs sampling, one of the runs from each problem does not converge to zero. Instead, it seems to converge to a local optimum (which is not the global optimum). This can be seen in the pink line in Figure 18 and the jungle green line in Figure 20. We could probably construct a very simple network that would not provoke this behavior.
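For instance, on the same style of tiny illustrative network as before (Rain -> WetGrass, WetGrass = True observed; made-up CPT numbers), a single free variable means the full conditional is exact and the chain mixes immediately:

```python
import random

# Sketch of Gibbs sampling with one free variable: the chain cannot get
# stuck in a local optimum, since each step resamples Rain directly from
# its full conditional given the evidence.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

def gibbs(n, burn=100, seed=0):
    rng = random.Random(seed)
    num = P_RAIN * P_WET_GIVEN_RAIN[True]
    p_cond = num / (num + (1 - P_RAIN) * P_WET_GIVEN_RAIN[False])
    hits = 0
    for i in range(n):
        rain = rng.random() < p_cond        # resample from the full conditional
        if i >= burn:
            hits += rain
    return hits / (n - burn)

est = gibbs(20000)                          # near the exact posterior, about 0.692
```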

In comparing the computation time of sampling methods to variable elimination, we limit ourselves to greedy-ordering variable elimination, since random ordering is very sub-optimal (see Section 4). It turns out that for the networks and queries that we considered, variable elimination is the champ on both accuracy and speed. As can be seen from Table 2, variable elimination finished in around a second or less on each problem, while Gibbs took about 15 seconds and Likelihood Weighting took around 5 seconds. This is with 1000 samples for the sampling algorithms; variable elimination, being exact, is effectively equivalent to infinitely many samples.

Our results might have been different if the networks involved were much denser (i.e. more connected) or much larger.


                        Insurance 1   Insurance 2   Carpo 1   Carpo 2
Variable Elimination       0.741         1.142       0.120     0.090
Gibbs Sampling            12.778        13.530      19.228    18.045
Likelihood Weighting       4.377         4.687       5.608     5.317

Table 2. Execution time (seconds) of various algorithms on the four problems from Task 2.
