Computer science is a discipline that spans theory and practice, grounded in mathematics and physics. Computer scientists must be skilled at modeling and analyzing problems. With computer science, you can create anything and do anything :D.
Following the success of LegalAnalytics, we implement generative pre-trained transformer (GPT)-based models to provide explanations and justifications for every theme applied to a legal document. This project aims to explain judicial decisions and help reduce the backlog of pending cases at the Brazilian Supreme Court (STF).
Specific objectives:
- Development of a platform to analyze legal documents using language models.
- Implementation of an interactive tool for understanding legal decisions.
- An LLM-based explanation method to analyze the content of legal documents (sketched below).
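As a rough illustration of the explanation step, here is a minimal Python sketch assuming an OpenAI-compatible chat API; the model name, prompt wording, and the `explain_theme` helper are placeholders, not the project's actual pipeline.

```python
# Minimal sketch of the theme-explanation step (hypothetical helper).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_theme(document_text: str, theme: str) -> str:
    """Ask the model to justify why `theme` applies to a legal document."""
    prompt = (
        f"General Repercussion theme: {theme}\n"
        f"Excerpt of the decision:\n{document_text}\n\n"
        "Explain, in plain language, why this theme applies to the document."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,      # keep justifications consistent across runs
    )
    return response.choices[0].message.content
```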
Implementation and development of a web-based system to explore judicial legal documents (textual information), using NLP-based models to associate each document with the Themes of "General Repercussion". This project aims to infer judicial decisions and help reduce the backlog of pending cases at the Brazilian Supreme Court (STF).
Specific objectives:
- Development of a platform to analyze legal documents.
- Implementation of highly accurate models to infer the decisions of legal proceedings (a baseline is sketched below).
- An NLP-based explanation method to analyze the inferred decisions.
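As a hedged baseline for the theme-assignment model, the sketch below uses TF-IDF features with a linear classifier; the texts and theme labels are dummy placeholders, and the production models may differ entirely.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Dummy training data: decision texts paired with hypothetical theme labels.
texts = [
    "appeal concerning pension benefit calculation",
    "dispute over telecom service tax collection",
]
themes = ["theme-A", "theme-B"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(max_iter=1000),    # linear theme classifier
)
model.fit(texts, themes)
print(model.predict(["new appeal on pension benefits"]))  # likely ['theme-A']
```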
Implementation and development of a framework for the detection of illegal timber operations in the Amazon rainforest. This work relies on integrating two major Brazilian databases (Sinaflor and Sisflora). The main goal is to infer and build a graph model of wood product transportation and exportation between Brazilian companies.
Specific objectives:
- A web-based system to explore timber deforestation in Brazil.
- Implementation of highly accurate models to infer timber chains between companies (see the graph sketch below).
- A Markov-chain explanation method to analyze the inferred chains.
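The graph construction can be sketched as follows, assuming the merged Sinaflor/Sisflora records are flattened into (seller, buyer, volume) tuples; the record values and the mass-balance rule for flagging nodes are illustrative only.

```python
import networkx as nx

records = [  # hypothetical merged records: (seller, buyer, volume in m3)
    ("CompanyA", "CompanyB", 120.0),
    ("CompanyB", "ExporterC", 150.0),
]

G = nx.DiGraph()
for seller, buyer, volume_m3 in records:
    # Accumulate volume when the same pair trades more than once.
    if G.has_edge(seller, buyer):
        G[seller][buyer]["volume_m3"] += volume_m3
    else:
        G.add_edge(seller, buyer, volume_m3=volume_m3)

# Flag nodes that ship out more timber than they received: a simple
# mass-balance check for potentially illegal injections into the chain.
for node in G.nodes:
    received = sum(d["volume_m3"] for _, _, d in G.in_edges(node, data=True))
    shipped = sum(d["volume_m3"] for _, _, d in G.out_edges(node, data=True))
    if shipped > received and G.in_degree(node) > 0:
        print(f"{node}: ships {shipped} m3 but received only {received} m3")
```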
Implementation and development of a web-based system to explore urban perception in street images, using deep learning models to associate perceived safety with perceptual scores. This project aims to explain why and how a model learns to perceive.
Specific objectives:
- A web-based system to explore urban perception.
- Training of a deep learning model to associate perceptions with perceptual scores (sketched below).
- A deep learning explanation method to study the behavior of the predictor.
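A minimal sketch of the perception predictor, assuming a ResNet-18 backbone regressing a single safety score per image; the batch of images and scores below is random dummy data.

```python
import torch
import torch.nn as nn
from torchvision import models

# Replace the final classification layer with a 1-D regression head.
backbone = models.resnet18(weights=None)  # weights="IMAGENET1K_V1" to use pretraining
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Hypothetical batch: 8 street images and their crowd-sourced safety scores.
images = torch.randn(8, 3, 224, 224)
scores = torch.rand(8, 1)

optimizer.zero_grad()
pred = backbone(images)
loss = loss_fn(pred, scores)  # regress a perceptual score, not a class label
loss.backward()
optimizer.step()
```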
Design, implementation, and development of a transversal, flexible, and scalable ICT infrastructure based on fog computing technology, allowing greater abstraction of the data coming from mobile devices and sensor modules.
Specific objectives:
- Development of libraries on the web server.
- Implementation of a concurrent and scalable database.
- Recognition, sending, and receiving of data from the sensors to the server (an ingestion sketch follows).
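A minimal sketch of the sensor-ingestion path, assuming devices POST JSON readings to a Flask endpoint; the route, payload fields, and in-memory store are stand-ins for the actual infrastructure.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
readings = []  # stand-in for the concurrent, scalable database

@app.route("/sensors", methods=["POST"])
def ingest():
    payload = request.get_json(force=True)
    # e.g. {"device_id": "node-7", "temperature": 23.4, "ts": 1700000000}
    readings.append(payload)
    return jsonify({"stored": len(readings)}), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```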
To explore large graphs, we implement a visual analytics tool combining interactive visualizations and computational techniques. Users can navigate nodes and edges, analyze and highlight similarity between nodes, and apply dynamic filtering and zooming.
[code]
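One computational technique behind such a tool can be sketched as follows: Jaccard similarity of node neighborhoods plus a simple degree filter, using a small built-in graph as a stand-in for real data.

```python
import networkx as nx

G = nx.karate_club_graph()  # stand-in for a large input graph

def neighborhood_similarity(G, u, v):
    """Jaccard similarity of the neighbor sets of u and v."""
    nu, nv = set(G[u]), set(G[v])
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0

# Dynamic filtering: keep only hub nodes above a degree threshold.
hubs = [n for n, deg in G.degree() if deg >= 10]
for u in hubs:
    for v in hubs:
        if u < v:
            print(u, v, round(neighborhood_similarity(G, u, v), 2))
```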
To explore large graphs of fishing activity, we implement a visual tool combining visualizations and graph-based computational techniques. Users can navigate, track, count, and detect outliers in the fishing activity of different companies across different time frames.
[code]
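A hedged sketch of the outlier-detection step: flag company/month activity counts that deviate strongly from that company's own history. The data frame and z-score threshold are illustrative.

```python
import pandas as pd

events = pd.DataFrame({
    "company": ["A"] * 6 + ["B"] * 6,
    "month": ["2021-01", "2021-02", "2021-03",
              "2021-04", "2021-05", "2021-06"] * 2,
    "trips": [10, 11, 9, 10, 12, 40, 5, 6, 5, 6, 5, 6],
})

# Per-company mean and standard deviation of monthly trip counts.
stats = events.groupby("company")["trips"].agg(["mean", "std"])
events = events.join(stats, on="company")
events["z"] = (events["trips"] - events["mean"]) / events["std"]
print(events[events["z"].abs() > 1.5])  # e.g. company A's June spike
```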
To explore urban perception in street-view imagery, we analyze the visual appearance and the objects within each image. We use explanation methods and segmentation models to study and quantify the impact of each object on the perception of urban safety.
[code]
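The per-object quantification can be sketched as an occlusion test: gray out each segmented object and measure the drop in the predicted safety score. The `score_model`, image, and masks below are placeholders.

```python
import torch

def object_impact(score_model, image, masks):
    """Score drop when each segmented object class is occluded (grayed out)."""
    base = score_model(image.unsqueeze(0)).item()
    impacts = {}
    for label, mask in masks.items():  # mask: (H, W) boolean tensor
        occluded = image.clone()
        occluded[:, mask] = 0.5        # gray out the object's pixels
        impacts[label] = base - score_model(occluded.unsqueeze(0)).item()
    return impacts

# Usage with dummy inputs: one 224x224 image, "tree" and "wall" masks.
image = torch.rand(3, 224, 224)
masks = {"tree": torch.zeros(224, 224, dtype=torch.bool),
         "wall": torch.zeros(224, 224, dtype=torch.bool)}
masks["tree"][:100, :100] = True
score_model = lambda x: x.mean(dim=(1, 2, 3))  # stand-in for the trained model
print(object_impact(score_model, image, masks))
```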
To estimate unknown parameters, we use block Gibbs sampling, which preserves properties such as ergodicity, so the chain's sample mean and variance converge to those of the target distribution. We also analyze the bias-variance trade-off after sampling the distribution's variables.
[code]
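A minimal sketch of the idea on a toy target, a standard bivariate normal with correlation rho, where the full conditionals are known in closed form and each block is a single coordinate; in the real model, the conditionals of the unknown parameters take their place.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_iter, burn_in = 0.8, 20000, 2000
x = y = 0.0
samples = []

for t in range(n_iter):
    # Full conditionals of a standard bivariate normal with correlation rho.
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples.append((x, y))

chain = np.array(samples[burn_in:])  # drop burn-in before estimating moments
print("ergodic means:", chain.mean(axis=0))     # ~ (0, 0)
print("ergodic variances:", chain.var(axis=0))  # ~ (1, 1)
```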
To store and query multidimensional spatial data, we use an R-Tree data structure, exposed through a web server and landing page for analyzing and testing R-Tree-based spatial search and storage.
[code]
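The store-and-search operations can be sketched with the `rtree` Python library standing in for the custom implementation; the points below are dummy city coordinates.

```python
from rtree import index

idx = index.Index()
points = {1: (-46.63, -23.55), 2: (-43.17, -22.91), 3: (-47.88, -15.79)}

for pid, (lon, lat) in points.items():
    # A point is stored as a degenerate bounding box (minx, miny, maxx, maxy).
    idx.insert(pid, (lon, lat, lon, lat))

# Range search: all points inside a query window.
hits = list(idx.intersection((-47.0, -24.0, -46.0, -23.0)))
print(hits)  # -> [1]

# Nearest-neighbor search: the closest stored point to a query location.
print(list(idx.nearest((-43.2, -22.9, -43.2, -22.9), 1)))  # -> [2]
```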
Search engine implementation using inverted hash tables as the core data structure, a radix tree for fast queries, and several encoding/decoding algorithms.
[code]
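The core inverted-index idea can be sketched as below (each term maps to a posting list of document ids); the radix-tree prefix lookup and the encoding/decoding codecs of the full engine are omitted.

```python
from collections import defaultdict

docs = {0: "the quick brown fox", 1: "the lazy dog", 2: "quick dog tricks"}

inverted = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        inverted[term].add(doc_id)  # posting list: docs containing the term

def search(query):
    """AND query: intersect the posting lists of every query term."""
    postings = [inverted.get(t, set()) for t in query.lower().split()]
    return sorted(set.intersection(*postings)) if postings else []

print(search("quick dog"))  # -> [2]
```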
To perform camera calibration, we use OpenCV and a set of heuristics to detect, track, and sort ellipse patterns, together with Ankur's refinement process to obtain better estimates of the distortion matrix.
[code]
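A sketch of the calibration loop using OpenCV's built-in circle-grid detector as a stand-in for the custom ellipse heuristics; the image folder, grid size, and object-point layout are assumptions, and the iterative refinement step is omitted.

```python
import glob
import cv2
import numpy as np

pattern = (4, 11)  # hypothetical asymmetric circle grid: 4 x 11 centers
# Object points of the asymmetric grid on a planar target (z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
for i in range(pattern[1]):
    for j in range(pattern[0]):
        objp[i * pattern[0] + j] = (2 * j + i % 2, i, 0)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.png"):  # placeholder image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(
        gray, pattern, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
    if found:
        obj_points.append(objp)
        img_points.append(centers)
        size = gray.shape[::-1]

# Intrinsics K and the distortion coefficients come out of calibrateCamera.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("reprojection RMS:", rms)
print("distortion coefficients:", dist.ravel())
```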