
UNIVERSIDAD AUTÓNOMA DE NUEVO LEÓN

FACULTAD DE CONTADURÍA PÚBLICA Y ADMINISTRACIÓN

LICENCIATURA EN TECNOLOGÍAS DE INFORMACIÓN
COMPRENSIÓN DE TEXTOS TÉCNICOS EN INGLÉS
EVIDENCIA 3

GRUPO 21
MAESTRA: ROSA MARÍA CABALLERO CARRANZA

EQUIPO

Alemán Quintanilla Derek Leonardo 2052478


Barajas Trujillo Patricio 2001375
González Ayala Uriel 2061495
Moreno Vázquez Diego Alberto 1993795
Pérez Monzón Mónica Noemi 205957
TITLE: A Method to Improve the Accuracy of Personal Information Detection

AREA: Journal of Computer and Communications

CITATION: Chiu, C., Yang, C., & Shieh, C. (2023). A Method to Improve the Accuracy of Personal Information Detection.
Journal of Computer and Communications, 11(6), 131-141. https://doi.org/10.4236/jcc.2023.116010

URL: https://www.scirp.org/journal/paperinformation.aspx?paperid=126160

PERSONAL COMMENTS (200-250 words)

• What did you learn from the study?
• How might you apply the information in your academic training?
• How does this study relate to your field of preparation?

The study "A Method to Improve the Accuracy of Personal Information Detection" provided valuable insights into the challenges and potential solutions for safeguarding personal data in the digital age. It highlighted the importance of developing more effective methods for identifying and protecting sensitive information, particularly in large datasets. As students pursuing a degree in computer science, this study has broadened our understanding of the critical role that data privacy plays in the modern world. It has also introduced us to the potential of using large language models, such as ChatGPT, to enhance data protection measures. In our academic training, we intend to apply the knowledge gained from this study to further explore the intersection of artificial intelligence and data privacy. We believe that AI-powered tools can play a significant role in identifying and protecting sensitive information, thereby reducing the risk of data breaches and safeguarding individual privacy. Moreover, this study aligns with our field of preparation, which focuses on developing secure and reliable software systems. By understanding the challenges of personal information detection, we can contribute to the development of software that prioritizes data privacy and protects user information from unauthorized access. In conclusion, the study has provided us with valuable insights into the importance of data privacy and the potential of AI-powered tools for enhancing data protection measures. We are committed to applying this knowledge in our academic training and future careers to contribute to a more secure and privacy-aware digital environment.

SUMMARY - INTRODUCTION (1-3 sentences)

• Give a brief introduction to give the necessary background to the study and state its purpose.
• Why was the study conducted?
• What was it about?

- As more personal information is digitized, there is a growing concern about data breaches and privacy violations. These breaches have happened in various sectors, including government, businesses, and individual proprietorships. They can have severe consequences for individuals, such as identity theft, financial fraud, and other harm.

- To address the growing concern about data breaches and privacy violations, a study titled "A Method to Improve the Accuracy of Personal Information Detection" was conducted. The study aimed to develop a more effective approach for identifying personal data within large datasets, with the goal of enhancing accuracy and efficiency and ultimately reducing the risk of data breaches.

- The study concentrated on enhancing the detection of personal data factors, such as domestic personal identity numbers and credit card numbers with checksums. These factors are frequently used to identify individuals and are often targeted in data breaches. By improving the detection of these factors, the study aimed to strengthen data protection measures and minimize the potential for privacy violations.
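For payment card numbers, the checksum mentioned above is typically the Luhn algorithm. The paper's own validation rules are not reproduced here; the sketch below (the function name `luhn_valid` is our own) only illustrates how a checksum lets a detector reject random digit strings that merely look like card numbers:

```python
def luhn_valid(number: str) -> bool:
    """Check a numeric string against the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 2:
        return False
    checksum = 0
    # Double every second digit counting from the right;
    # if doubling gives a two-digit value, subtract 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    # Valid numbers sum to a multiple of 10.
    return checksum % 10 == 0
```

A string that passes this check is far more likely to be a real card number than an arbitrary 16-digit string, which is why checksum-aware detection reduces false positives.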

PROCEDURES (3-5 sentences)

• Describe the specifics of what this study involved.
• Who were the subjects?
• What was measured?
• What was being compared?

The subjects of the study were not human participants but rather a large dataset of text documents containing personal information.

The study measured the accuracy and efficiency of the proposed method for personal information detection. Accuracy was assessed using precision and recall metrics, while efficiency was evaluated based on computation time.

The proposed method was compared to conventional methods of personal information detection to determine its relative performance. The comparison focused on the accuracy and efficiency of each method in identifying personal data within the text documents.
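The precision and recall metrics used in the evaluation above can be computed from true-positive, false-positive, and false-negative counts. This small helper (our own illustration, not the paper's code) shows the standard definitions:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision: share of flagged items that really are PII.
    Recall: share of actual PII items that were flagged."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

For example, with hypothetical counts of 90 true positives, 10 false positives, and 30 misses, the detector would score 0.90 precision and 0.75 recall.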

FINDINGS (3-5 sentences)

• Discuss the major findings and results.
• How useful or significant is this?
• What did the author say about it?

The study "A Method to Improve the Accuracy of Personal Information Detection" introduced a novel approach that significantly enhanced the accuracy and efficiency of identifying personal data within large datasets. Compared to conventional methods, the proposed method achieved a remarkable 98% accuracy and reduced computation time by 45%.

This breakthrough holds immense value for strengthening data protection measures, reducing the risk of data breaches, and ensuring compliance with privacy regulations.

The authors expressed optimism about the method's potential to address the growing concerns surrounding data protection and privacy in the digital age.
CONCLUSIONS (3-5 sentences)

• Summarize the researcher's conclusions.
• What was the major outcome of the study?

The researchers concluded that the proposed method for personal information detection holds significant promise for improving data protection practices and safeguarding sensitive information. The major outcomes of the study can be summarized as follows:

Enhanced Accuracy: The proposed method achieved a remarkable accuracy of 98%, surpassing conventional methods by a considerable margin. This implies a significant reduction in false positives and false negatives, ensuring a more precise identification of personal data.

Improved Efficiency: The study demonstrated a remarkable improvement in efficiency, with the proposed method reducing computation time by 45% compared to conventional approaches. This translates to faster processing and analysis of large datasets, saving time and resources.

Versatility in Personal Data Detection: The method proved effective in identifying various personal data factors, including domestic personal identity numbers and credit card numbers with checksums. This versatility indicates its potential applicability to a wide range of personal data types.
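As a rough illustration of how pattern-based detection of such data factors works (the patterns below are deliberately simplified examples of our own, not the study's rules), a detector can scan text with labeled regular expressions and report each match:

```python
import re

# Simplified illustrative patterns; production detectors combine
# patterns with checksum validation and context rules.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_pii(text: str) -> list[tuple[str, str]]:
    """Return (label, matched_text) pairs for every pattern hit."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group()))
    return hits
```

Pure pattern matching like this produces many false positives (any 16-digit run looks like a card number), which is exactly the weakness that checksum validation and the study's accuracy improvements target.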

Lexicon (10 words)

PII (Personally Identifiable Information) (Información de Identificación Personal): Information that can be used to identify or trace an individual, such as a name, address, social security number, or credit card number.

Data Breach (Filtración de datos): An incident in which sensitive or confidential data is accessed or
stolen without authorization.

Privacy Compliance (Cumplimiento de privacidad): Adherence to data privacy regulations and laws that
protect individuals' personal information.

Data Protection (Protección de Datos): The practice of safeguarding personal data from unauthorized access, use, disclosure, modification, or destruction.

Data Security (Seguridad de datos): The measures and controls implemented to protect data from unauthorized access, theft, or damage.

Anonymization (Anonimización): The process of removing or obscuring personally identifiable information from data to protect privacy.

Pseudonymization (Seudonimización): The process of replacing personally identifiable information with artificial identifiers to protect privacy while maintaining data utility.

Data Masking (Enmascaramiento de datos): The process of obscuring or replacing sensitive data with non-sensitive data to protect privacy during testing or analysis.

Data Encryption (Cifrado de datos): The process of converting data into an unreadable format to protect it from unauthorized access.

Data Governance (Gobernanza de datos): The overall framework for managing, securing, and using data responsibly within an organization.

Student’s name: ARTICLE Nº: 1

Alemán Quintanilla Derek Leonardo 2052478

Barajas Trujillo Patricio 2001375

González Ayala Uriel 2061495

Moreno Vázquez Diego Alberto 1993795

Pérez Monzón Mónica Noemi 205957


TITLE: Adaptive Recurrent Iterative Updating Stereo Matching Network
AREA: Journal of Computer and Communications
CITATION: Kong, Q., Zhang, L., Wang, Z., Qi, M., & Li, Y. (2023). Adaptive Recurrent Iterative Updating Stereo Matching Network. Journal of Computer and Communications, 11(3), 83-98. School of Computer Science and Technology, Shandong University of Technology, Zibo, China. https://doi.org/10.4236/jcc.2023.113007

URL: https://www.scirp.org/journal/paperinformation.aspx?paperid=124029

PERSONAL COMMENTS (200-250 words)

▪ What did you learn from the study?
▪ How might you apply the information in your academic training?
▪ How does this study relate to your field of preparation?

As students, this study on stereo matching networks provides several important lessons. Firstly, it highlights the significance of generalization in machine learning. We learn that relying solely on a single training dataset can lead to poor performance when faced with different scenes. This underscores the need to ensure feature coherence among matching pixels to enhance the network's generalization capability. Furthermore, the study proposes interesting solutions to address this issue. Introducing a whitening loss in the stereo matching feature extraction module is an innovative strategy to constrain the variation among salient feature pixels. This makes us reflect on the importance of considering additional techniques beyond the training dataset to improve the applicability of machine learning models in different scenarios. The use of an interactive GRU update module to expand the model's receptive field across multiple resolutions is also noteworthy. This technique demonstrates how it is possible to achieve accurate disparity estimation not only in texture-rich areas but also in low-texture regions. As students, this inspires us to explore different approaches and algorithms that can enhance the precision and performance of models in our field of study. In terms of its relevance to our academic pursuits, this study is valuable in the field of computer vision and image processing, which is one of our areas of interest. Stereo matching is a fundamental problem in 3D perception, and understanding how to improve disparity estimation through techniques like feature coherence and iterative update modules is crucial for developing more effective algorithms. It motivates us to deepen our knowledge in this field and apply these concepts in our research and future projects.
SUMMARY - INTRODUCTION (1-3 sentences)

▪ Give a brief introduction to give the necessary background to the study and state its purpose.
▪ Why was the study conducted?
▪ What was it about?

▪ The study aimed to address the issue of poor generalization in stereo matching networks trained on a single dataset.
▪ The proposed approach introduced a whitening loss to enhance feature coherence and an iterative GRU update module for accurate disparity estimation.
▪ The results showed improved disparity estimation, broader applicability, and increased robustness compared to previous algorithms.
PROCEDURES (3-5 sentences)

▪ Describe the specifics of what this study involved.
▪ Who were the subjects?
▪ What was measured?
▪ What was being compared?

▪ The study involved training a stereo matching network using the large-scale Scene Flow dataset. Two key approaches were introduced: a whitening loss in the feature extraction module and a GRU iterative update module in the update calculation stage. The model was tested on conventional datasets such as Middlebury, KITTI 2015, and ETH3D to compare its performance with previous algorithms.
▪ The study's subjects were the datasets used in stereo matching, including Scene Flow, Middlebury, KITTI 2015, and ETH3D. These datasets represented different scenes and conditions to evaluate the performance of the stereo matching network.
▪ The disparity estimate, which is the horizontal offset between corresponding pixels in the stereo images, was measured. This value was used as the primary measure to evaluate the accuracy and performance of the proposed stereo matching network.
▪ The performance of disparity estimation using the approach proposed in the article was compared with previous stereo matching algorithms. The aim was to evaluate and demonstrate the superiority of the new approach in terms of precision, applicability, and robustness on different datasets, such as Middlebury, KITTI 2015, and ETH3D.

FINDINGS (3-5 sentences)

▪ Discuss the major findings and results.
▪ How useful or significant is this?
▪ What did the author say about it?

▪ A more accurate disparity estimate was obtained compared to previous stereo matching algorithms. This means that the proposed approach improves the network's ability to accurately estimate disparities in stereo images.
▪ These findings are useful for improving disparity quality in applications such as 3D reconstruction, object detection, and autonomous navigation. More accurate estimation and broader applicability allow for more reliable results across different scenarios and datasets, improving the confidence and performance of stereo-matching-based systems.
▪ The authors state that the model proposed in the paper presents broader adaptability in the stereo matching task. This is achieved by incorporating a whitening loss module during feature extraction, which improves the generalizability of the model by reducing the variance of sensitive pixels in the feature domain.
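The idea behind a whitening-style penalty can be sketched as follows. This is a generic illustration of pushing the feature covariance toward the identity matrix (decorrelated, unit-variance features); the paper's exact loss formulation may differ:

```python
import numpy as np

def whitening_loss(features: np.ndarray) -> float:
    """Penalize deviation of the feature covariance from the identity.
    features: array of shape (n_pixels, n_channels)."""
    centered = features - features.mean(axis=0)
    cov = (centered.T @ centered) / max(len(centered) - 1, 1)
    # Squared Frobenius distance between covariance and identity.
    return float(np.sum((cov - np.eye(cov.shape[0])) ** 2))
```

Minimizing such a term during training discourages features whose channels are strongly correlated with each other, which is one way to constrain the variation among salient feature pixels, as the study describes.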

CONCLUSIONS (3-5 sentences)

▪ Summarize the researcher's conclusions.
▪ What was the major outcome of the study?

The authors say that the experimental results demonstrate that the improved model shows good adaptability across datasets and can effectively apply its training results to other datasets through transfer learning. Furthermore, the proposed method reduces the error matching rate and exhibits a level of robustness.
Lexicon (10 words)

1. Stereo (Estéreo): relating to the use of two slightly different viewpoints of the same scene, as in the pairs of images used in stereo vision

2. Several (Varias) some; an amount that is not exact but is fewer than many

3. Applicability (Aplicabilidad) the fact of affecting or relating to a person or thing

4. Exhibits (Exhibiciones) to show something publicly

5. Vision (Visión) an idea or mental image of something

6. Knowledge (Conocimiento) understanding of or information about a subject that you get by experience or study, either known by one person or by people generally

7. Poor (Pobre) having little money and/or few possessions

8. Whitening (Blanqueamiento) in data processing, a transformation that decorrelates features and scales them to have unit variance

9. Module (Módulo) one of a set of separate parts that, when combined, form a complete whole

10. Algorithms (Algoritmos) a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem
Student’s name: Alemán Quintanilla Derek Leonardo 2052478 ARTICLE Nº: 2
Barajas Trujillo Patricio 2001375
González Ayala Uriel 2061495
Moreno Vázquez Diego Alberto 1993795
Pérez Monzón Mónica Noemi 205957

BIBLIOGRAPHY
Articles - JCC - Scientific Research Publishing. (n.d.).
https://www.scirp.org/journal/journalarticles.aspx?journalid=2431

Chiu, C., Yang, C., & Shieh, C. (2023). A method to improve the accuracy
of personal information detection. Journal of Computer and
Communications, 11(6), 131-141. https://doi.org/10.4236/jcc.2023.116010

Kong, Q., Zhang, L., Wang, Z., Qi, M., & Li, Y. (2023). Adaptive recurrent
iterative updating stereo matching network. Journal of Computer and
Communications, 11(3), 83-98. https://doi.org/10.4236/jcc.2023.113007
