IEEE PROJECTS & SOFTWARE DEVELOPMENTS
IEEE Final Year Projects | IEEE Engineering Projects | IEEE Students Projects | IEEE Bulk Projects | BE/BTech/ME/MTech/MS/MCA Projects | CSE/IT/ECE/EEE Projects
Cell: +91 98495 39085, +91 99662 35788, +91 98495 57908, +91 97014 40401
Visit: www.finalyearprojects.org | Mail to: firstname.lastname@example.org
QoS Ranking Prediction for Cloud Services
ABSTRACT: Cloud computing is becoming increasingly popular, and building high-quality cloud applications is a critical research problem. QoS rankings provide valuable information for making an optimal cloud service selection from a set of functionally equivalent service candidates. To obtain QoS values, real-world invocations of the service candidates are usually required.
To avoid such time-consuming and expensive invocations, this paper proposes a QoS ranking prediction framework for cloud services that takes advantage of the past service usage experiences of other consumers. The proposed framework requires no additional invocations of cloud services when making QoS ranking predictions. A straightforward approach to personalized cloud service QoS ranking is to evaluate all the candidate services on the user side and rank them by the observed QoS values. However, this approach is impractical in reality, since invocations of cloud services may be charged.
Even if the invocations are free, executing a large number of service invocations is time- and resource-consuming, and some invocations may produce irreversible effects in the real world. Moreover, when the number of candidate services is large, it is difficult for cloud application designers to evaluate all of them efficiently.
EXISTING SYSTEM: An existing recommender system must be able to suggest items that are likely to be preferred by the user. Technologies used in recommender systems fall into two categories: content-based filtering and collaborative filtering. Content-based filtering analyzes the content information associated with items and users, such as product descriptions and user profiles, in order to represent users and items with a set of features; to recommend new items to a user, content-based filters match the items' representations against those items known to be of interest to the user. In contrast, the collaborative filtering (CF) approach does not require any content information about the items. CF is based on the assumption that a user is usually interested in items preferred by other users with similar interests: it works by collecting ratings on the items from a large number of users and making recommendations to a user based on the preference patterns of other users. In most systems, the degree of preference is represented by a rating score. Given a database of users' past ratings on a set of items, traditional collaborative filtering algorithms predict the potential ratings that a user would assign to the unrated items, so that the items can be ranked by predicted rating to produce a list of recommendations.
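As a concrete illustration of the rating-based collaborative filtering described above, the following sketch (not taken from the paper; the function names and data layout are our own) predicts an active user's rating for an unrated item as a similarity-weighted average of other users' ratings, with similarity measured by Pearson correlation over co-rated items:

```python
import math

def pearson_similarity(u, v):
    """Pearson correlation over the items co-rated by users u and v.
    Users are dicts mapping item -> rating."""
    common = set(u) & set(v)
    if len(common) < 2:
        return 0.0  # not enough overlap to estimate similarity
    mu_u = sum(u[i] for i in common) / len(common)
    mu_v = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu_u) * (v[i] - mu_v) for i in common)
    den = (math.sqrt(sum((u[i] - mu_u) ** 2 for i in common)) *
           math.sqrt(sum((v[i] - mu_v) ** 2 for i in common)))
    return num / den if den else 0.0

def predict_rating(active, others, item):
    """Predict the active user's rating for an unrated item as the
    similarity-weighted average of other users' ratings on it."""
    num = den = 0.0
    for other in others:
        if item in other:
            s = pearson_similarity(active, other)
            if s > 0:  # only positively similar users contribute
                num += s * other[item]
                den += s
    return num / den if den else None
```

Predicted ratings computed this way can then be sorted to produce the recommendation list the paragraph describes.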
PROPOSED SYSTEM: We propose a personalized ranking prediction framework, named CloudRank, to predict the QoS ranking of a set of cloud services without requiring additional real-world service invocations from the intended users. Our approach takes advantage of the past usage experiences of other users to make a personalized ranking prediction for the current user. The training data in the CloudRank framework can be obtained from: 1) the QoS values provided by other users, and 2) the QoS values collected by monitoring cloud services. The framework consists of several modules. First, based on the user-provided QoS values, similarities between the active user and the training users are calculated. Second, based on the similarity values, a set of similar users is identified. After that, two algorithms (CloudRank1 and CloudRank2) make personalized service rankings by taking advantage of the past service usage experiences of these similar users. Finally, the ranking prediction results are provided to the active user. This paper identifies the critical problem of personalized QoS ranking for cloud services and proposes a QoS ranking prediction framework to address it. To the best of our knowledge, CloudRank is the first personalized QoS ranking prediction framework for cloud services. Extensive real-world experiments are conducted to study the ranking prediction accuracy of our ranking prediction algorithms compared with other competing ranking algorithms, and the results show the effectiveness of our approach. We also publicly release our service QoS data set for future research, which makes our experiments reproducible.
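The similarity step above can be sketched with the Kendall rank correlation coefficient, which scores how consistently two users order the services they have both invoked. This is a minimal sketch under our own naming, not the paper's reference implementation; users are assumed to be dicts mapping service → observed QoS value (e.g., response time):

```python
def krcc(u, v):
    """Kendall rank correlation over services invoked by both users:
    (concordant pairs - discordant pairs) / total pairs, in [-1, 1]."""
    common = sorted(set(u) & set(v))
    n = len(common)
    if n < 2:
        return 0.0  # not enough commonly invoked services
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (u[common[i]] - u[common[j]]) * (v[common[i]] - v[common[j]])
            if s > 0:
                conc += 1  # both users order this pair the same way
            elif s < 0:
                disc += 1  # the users order this pair oppositely
    return (conc - disc) / (n * (n - 1) / 2)

def top_k_similar(active, training_users, k=3):
    """Keep the k training users with the highest positive similarity."""
    scored = sorted(training_users, key=lambda t: krcc(active, t), reverse=True)
    return [t for t in scored[:k] if krcc(active, t) > 0]
```

The returned similar users are the ones whose past usage experiences would then feed the ranking step.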
SYSTEM ARCHITECTURE: (architecture diagram not reproduced in this copy)
IMPLEMENTATION: We present two QoS ranking prediction algorithms, named CloudRank1 and CloudRank2.
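Both algorithms turn pairwise preferences between services into a full ordering by a greedy strategy. The following is our own simplified sketch of that greedy step, not the paper's exact CloudRank1/CloudRank2 implementations; `pref(a, b) > 0` is assumed to mean service `a` is preferred over service `b`:

```python
def greedy_rank(services, pref):
    """Greedily order services: at each step, pick the service with the
    largest total preference over the services still unranked."""
    remaining = set(services)
    order = []
    while remaining:
        best = max(remaining,
                   key=lambda s: sum(pref(s, t) for t in remaining if t != s))
        order.append(best)
        remaining.remove(best)
    return order
```

For example, with preference values derived from per-service scores, the service with the highest score is ranked first, and so on down the list.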
CONCLUSION: We propose a personalized QoS ranking prediction framework for cloud services, which requires no additional service invocations when making QoS rankings. By taking advantage of the past usage experiences of other users, our ranking approach identifies and aggregates the preferences between pairs of services to produce a ranking of services. We propose two ranking prediction algorithms for computing the service ranking based on the cloud application designer's preferences. Experimental results show that our approaches outperform existing rating-based approaches and the traditional greedy method.
FUTURE WORK: We would like to improve the ranking accuracy of our approaches by exploiting additional techniques (e.g., utilizing content information, data smoothing, random walk, matrix factorization, etc.). Our current approaches rank different QoS properties independently; we will conduct more investigations on the correlations and combinations of different QoS properties. When a user has multiple invocations of a cloud service at different times, we will explore time-aware QoS ranking prediction approaches by employing information about service users, cloud services, and time. We will also investigate the combination of rating-based and ranking-based approaches, so that users can obtain QoS ranking predictions as well as detailed QoS value predictions.