Affinity Workshop
LatinX in AI (LXAI) Workshop
Laura Montoya · Jose Gallego-Posada · Pablo Rivas · Vinicius Carida · Mateo Espinosa Zarlenga · Carlos Miranda · Andres Marquez · Ramesh Doddaiah · David Alvarez-Melis · Ivan Dario Arraut Guerrero · Mateo Guaman Castro · Ana Maria Quintero-Ossa · Fabian Latorre · Julio Hurtado · Jaime David Acevedo-Viloria · Miguel Felipe Arevalo-Castiblanco
Meeting Room 315
The LatinX in AI research workshop is a one-day event with invited speakers, oral presentations, and posters. The event brings together faculty, graduate students, research scientists, and engineers for an opportunity to connect and exchange ideas. There will be a panel discussion and a mentoring session to discuss current research trends and career choices in artificial intelligence and machine learning, highlighting the unique challenges of LatinX-identifying researchers. The workshop aims to create a platform for the work of LatinX researchers, and we invite everyone to attend.
Schedule
Mon 11:45 a.m. - 12:00 p.m.
|
Check-in
(
Break
)
|
🔗 |
Mon 12:00 p.m. - 12:05 p.m.
|
Opening Remarks
(
Opening Remarks
)
SlidesLive Video |
🔗 |
Mon 12:05 p.m. - 12:45 p.m.
|
The Future of A.I. for the Future of Work
(
Keynote by Saiph Savage
)
SlidesLive Video |
🔗 |
Mon 12:45 p.m. - 1:45 p.m.
|
Mentorship Hour
(
Panel
)
SlidesLive Video |
🔗 |
Mon 1:45 p.m. - 1:55 p.m.
|
Coffee and Networking
(
Break
)
|
🔗 |
Mon 1:55 p.m. - 2:10 p.m.
|
Everything you always wanted to know about applying to ML/AI Ph.D. programs but were afraid to ask
(
Tutorial by David Alvarez Melis
)
SlidesLive Video |
🔗 |
Mon 2:10 p.m. - 2:25 p.m.
|
1-Path-Norm Regularization of Deep Neural Networks
(
Oral
)
link
SlidesLive Video The so-called path-norm measure is considered one of the best indicators for good generalization of neural networks. This paper introduces a proximal gradient framework for the training of deep neural networks via 1-path-norm regularization, which is applicable to general deep architectures. We address the resulting nonconvex nonsmooth optimization model by transforming the intractable induced proximal operation to an equivalent differentiable proximal operation. We compare automatic differentiation (backpropagation) algorithms with the proximal gradient framework in numerical experiments on FashionMNIST and CIFAR10. We show that 1-path-norm regularization is a better choice than weight-decay for fully connected architectures, and it improves the robustness to the presence of noisy labels. In this latter setting, the proximal gradient methods have an advantage over automatic differentiation. |
Fabian Latorre · Antoine Bonnet · Paul Rolland · Nadav Hallak · Volkan Cevher 🔗 |
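As a rough illustration of the quantity this abstract regularizes (not the authors' proximal training framework), the 1-path-norm of a fully connected network can be evaluated without enumerating paths, as a product of entrywise-absolute weight matrices. A minimal NumPy sketch:

```python
import numpy as np

def one_path_norm(weights):
    """1-path-norm of a fully connected net: the sum, over every
    input-to-output path, of the product of absolute weights along the
    path. Evaluated layer by layer as 1^T |W_L| ... |W_1| 1."""
    v = np.ones(weights[0].shape[1])  # one entry per input unit
    for W in weights:
        v = np.abs(W) @ v
    return float(v.sum())

# Toy 2-layer net: the four paths contribute |1*3| + |-2*3| + |0.5*-1| + |0*-1| = 9.5
W1 = np.array([[1.0, -2.0], [0.5, 0.0]])  # hidden x input
W2 = np.array([[3.0, -1.0]])              # output x hidden
```

In a training loop, this scalar would be added to the loss as a penalty; the paper's contribution is handling its nonsmoothness with a proximal operator rather than plain backpropagation.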
Mon 2:25 p.m. - 2:40 p.m.
|
Evaluating the impact of incorporating ’legalese’ definitions and abstractive summarization on the categorization of legal cases by their holdings
(
Oral
)
link
SlidesLive Video Legal text is difficult to understand and requires domain-specific knowledge to read. This work aims to investigate the effect that model stacking and input processing have on information fidelity with the motivation to explore possibilities of expanding the accessibility of legal texts. We developed a legal dictionary through the United States Courts’ Glossary of Legal Terms to map complex terms into simple English and used FLAN-T5 to summarize observations. To evaluate performance, we used binary text classification to predict case holdings using LLMs (Large Language Models) and evaluated the results with and without model pretraining. To assess information fidelity, we ask: "Does model stacking affect classification performance?" and "Does performance change with pretraining?" |
Shiu Tin Ivan Ko · Daniela Cortes Bermudez · Henry Han · Huiyun Zhang 🔗 |
Mon 2:40 p.m. - 3:20 p.m.
|
Visual Storytelling with Generative Models of Video
(
Keynote by Ruben Villegas
)
SlidesLive Video |
🔗 |
Mon 3:20 p.m. - 4:15 p.m.
|
Lunch and Networking
(
Break
)
|
🔗 |
Mon 4:15 p.m. - 4:55 p.m.
|
RényiTester, a Variational Approach to Testing Differential Privacy
(
Keynote by Monica Ribero
)
SlidesLive Video |
🔗 |
Mon 4:55 p.m. - 5:40 p.m.
|
Platinum Sponsors Panel
(
with Ulysses Rocha, Kavya Srinet, Shagun Sodhani, and Yao Qin
)
SlidesLive Video |
🔗 |
Mon 5:40 p.m. - 5:55 p.m.
|
Generative network-based reduced-order model for prediction, data assimilation and uncertainty quantification
(
Oral
)
link
SlidesLive Video We propose a new method in which a generative network (GN) integrated into a reduced-order model (ROM) framework is used to solve inverse problems for partial differential equations (PDEs). The aim is to match available measurements and estimate the corresponding uncertainties associated with the states and parameters of a numerical physical simulation. The GN is trained using only unconditional simulations of the discretized PDE model. We compare the proposed method with the gold standard, Markov chain Monte Carlo. We apply the proposed approaches to a spatio-temporal compartmental model in epidemiology. The results show that the proposed GN-based ROM can efficiently quantify uncertainty and accurately match the measurements and the gold standard, using only a few unconditional simulations of the full-order numerical PDE model. |
Vinicius L S Silva · Claire Heaney · Christopher Pain 🔗 |
Mon 5:55 p.m. - 6:10 p.m.
|
Omega: Optimistic EMA Gradients
(
Oral
)
link
SlidesLive Video Stochastic min-max optimization has gained interest in the machine learning community with the advancements in GANs and adversarial training. Although game optimization is fairly well understood in the deterministic setting, some issues persist in the stochastic regime. Recent work has shown that stochastic gradient descent-ascent methods such as the optimistic gradient are highly sensitive to noise or can fail to converge. Although alternative strategies exist, they can be prohibitively expensive. We introduce Omega, a method with optimistic-like updates that mitigates the impact of noise by incorporating an EMA of historic gradients in its update rule. We also explore a variation of this algorithm that incorporates momentum. Although we do not provide convergence guarantees, our experiments on stochastic games show that Omega outperforms the optimistic gradient method when applied to linear players. |
Juan Ramirez · Rohan Sukumaran · Quentin Bertrand · Gauthier Gidel 🔗 |
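To illustrate the flavor of the method in this abstract — an optimistic-style update whose correction term is an EMA of historic gradients — here is a sketch on the bilinear game min_x max_y xy, where plain gradient descent-ascent famously diverges. The coefficients and the exact form of the correction are illustrative assumptions, not the paper's precise update rule:

```python
def omega_bilinear(x=1.0, y=1.0, lr=0.1, beta=0.5, steps=3000):
    """Optimistic-like updates on the bilinear game min_x max_y x*y,
    with the previous-gradient correction replaced by an EMA of past
    gradients (illustrative sketch, not the paper's exact rule)."""
    mx, my = 0.0, 0.0  # EMAs of historic gradients
    for _ in range(steps):
        gx, gy = y, -x  # descent gradients for the min player and negated max player
        mx = beta * mx + (1 - beta) * gx
        my = beta * my + (1 - beta) * gy
        # extrapolate the current gradient against the EMA (optimistic-like)
        x -= lr * (2 * gx - mx)
        y -= lr * (2 * gy - my)
    return x, y
```

On this deterministic toy game the iterates contract toward the equilibrium (0, 0); the paper's focus is the stochastic setting, where the EMA damps gradient noise.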
Mon 6:10 p.m. - 6:20 p.m.
|
Coffee and Networking
(
Break
)
|
🔗 |
Mon 6:20 p.m. - 7:00 p.m.
|
RLing in the Deep
(
Keynote by Pablo Samuel Castro
)
SlidesLive Video |
🔗 |
Mon 7:00 p.m. - 7:15 p.m.
|
Patent Process
(
Tutorial by Ramesh Doddaiah
)
SlidesLive Video |
🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
Unleashing the Power of Twitter: A Data Analysis of the US Senate's Social Media Strategy using Unsupervised Machine Learning
(
Poster
)
link
Social media platforms such as Twitter play a crucial role in political discourse and communication. They are the voters' window to their candidates, and what senators publish may determine their success in elections. A deep analysis is needed to comprehend the current situation and generate strategies to reach the audience. This paper combines the creation of a self-made dataset, machine learning topic models, an analysis of how geography influences the political landscape, and a proposed popularity metric to explain the current political landscape and provide insights about the most influential senators and their discourse. |
Miguel Cozar · Carlos Muñoz Losa · Kai Shu 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
Expanded Convolutional Network for Tabular Data
(
Poster
)
link
Convolutional neural networks (CNNs) are widely recognized for their effectiveness in computer vision tasks, but their spatial information capturing ability does not directly apply to tabular datasets lacking spatial correlation. In this paper, a tailored approach called Expanded CNN (ExCNN) is proposed for tabular data analysis. Unlike common practices of transforming tabular data into images or using transformer architectures, ExCNN enhances feature dimensionality through a fully connected layer, harnessing the benefits of complex neural networks adapted to the tabular data domain. The performance of ExCNN is evaluated on various datasets, comparing it to existing architectures and benchmarking against Gradient Boosted Decision Trees. While no universally superior solution emerges, ExCNN demonstrates promise by leveraging the advantageous characteristics of CNNs for tabular data, outperforming certain deep learning architectures in specific metrics. |
edson luque 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
Machine learning over the free-parameters of the Black-Scholes equation: Stock market and Option market
(
Poster
)
link
The Black-Scholes equation is famous for predicting the prices of options in the stock market. However, it has the limitation of depending on an estimated value of the volatility. On the other hand, several machine learning techniques have been employed to predict the same quantity. In this paper we analyze some fundamental properties of the Black-Scholes equation and then propose a way to train its free parameters, the volatility in particular, with the purpose of using this parameter as the fundamental one to be learned by a machine learning system and thereby improving predictions in the stock market. |
JORGE ARRAUT · Ivan Arraut · Ka I Lei 🔗 |
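For context on the free parameter this abstract targets: the classical way to "learn" the volatility is to calibrate it so the Black-Scholes price matches an observed option price. A minimal sketch of that calibration (the paper's ML training scheme is not reproduced here; bisection stands in for any learning rule, since the call price is increasing in volatility):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call with spot S, strike K,
    maturity T, risk-free rate r, and volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def fit_sigma(price, S, K, T, r, lo=1e-4, hi=5.0, iters=60):
    """Treat volatility as the free parameter and fit it to an observed
    option price by bisection (the price is monotone in sigma)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Round-tripping a price generated with a known volatility recovers that volatility, which is the consistency property any learned volatility estimate should satisfy.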
Mon 7:15 p.m. - 7:50 p.m.
|
Gabor Filters as Initializers for Convolutional Neural Networks: A Study on Inductive Bias and Performance on Image Classification
(
Poster
)
link
This study explores the impact of Gabor filters on Convolutional Neural Networks (CNNs) performance in image classification tasks. Prior research has indicated that the receptive filters of CNNs often resemble Gabor filters, suggesting their potential as initial receptive filters. We conducted an extensive analysis on various general object datasets, demonstrating that integrating Gabor filters in the receptive layer enhances CNN performance, as evidenced by improved accuracy, higher Area Under the Curve (AUC), and reduced loss. Furthermore, our findings suggest that CNNs equipped with Gabor filters in the receptive layer can perform better in a shorter training period than traditional random initialization techniques. |
Pablo Rivas · Mehang Rai 🔗 |
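A minimal sketch of the initializer studied in this abstract: generate a bank of oriented Gabor kernels (Gaussian envelope times sinusoidal carrier) and use it as the first conv layer's weights. The kernel size and parameter values below are illustrative assumptions, not the study's settings:

```python
import numpy as np

def gabor_kernel(size=7, theta=0.0, sigma=2.0, lambd=4.0, psi=0.0, gamma=0.5):
    """Real Gabor filter: a Gaussian envelope modulated by a cosine
    carrier at orientation theta and wavelength lambd."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lambd + psi)

# A bank of 8 oriented filters to initialize the receptive (first) conv layer,
# in place of random initialization.
bank = np.stack([gabor_kernel(theta=t)
                 for t in np.linspace(0, np.pi, 8, endpoint=False)])
```

In a deep learning framework, `bank` would be copied into the first convolution's weight tensor before training.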
Mon 7:15 p.m. - 7:50 p.m.
|
Evaluating GPT-3.5 and GPT-4 on Grammatical Error Correction for Brazilian Portuguese
(
Poster
)
link
We investigate the effectiveness of GPT-3.5 and GPT-4, two large language models, as Grammatical Error Correction (GEC) tools for Brazilian Portuguese and compare their performance against Microsoft Word and Google Docs. We introduce a GEC dataset for Brazilian Portuguese with four categories: Grammar, Spelling, Internet, and Fast typing. Our results show that while GPT-4 has higher recall than other methods, LLMs tend to have lower precision, leading to overcorrection. This study demonstrates the potential of LLMs as practical GEC tools for Brazilian Portuguese and encourages further exploration of LLMs for non-English languages and other educational settings. |
Maria Penteado · Fabio Perez 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
The Forward-Forward Algorithm as a feature extractor for skin lesion classification: A preliminary study.
(
Poster
)
link
Skin cancer, a deadly form of cancer, exhibits a 23% survival rate in the USA with late diagnosis. Early detection significantly increases the survival rate to approximately 99%, facilitating timely treatment. Accurate biomedical image classification is vital in medical analysis, aiding clinicians in disease diagnosis and treatment. Deep learning (DL) techniques, including backpropagation, convolutional neural networks, and transformers, have revolutionized clinical decision-making automation. However, computational cost and hardware constraints limit the implementation of state-of-the-art DL architectures. To address these limitations, we adopted the Forward-Forward Algorithm (FFA) as a computationally efficient alternative for optimizing DL training processes for skin lesion classification. While FFA results do not surpass traditional mechanisms like backpropagation, the trade-off between computational cost and performance becomes relevant in resource-limited clinical setups. We conduct a preliminary analysis of FFA for skin lesion classification, comparing it with backpropagation and exploring their combined use during training. |
Abel Reyes-Angulo · Sidike Paheding 🔗 |
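To make the backpropagation-free training mechanism in this abstract concrete, here is a single-layer sketch of the Forward-Forward idea: each layer is trained with a purely local objective that pushes its "goodness" (sum of squared activations) above a threshold for positive data and below it for negative data. The toy data and hyperparameters are illustrative assumptions, unrelated to the skin-lesion setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(W, x):
    """Layer 'goodness' in the Forward-Forward sense: the sum of
    squared ReLU activations."""
    h = np.maximum(W @ x, 0.0)
    return float(np.sum(h * h)), h

# One layer, trained locally (no backpropagation through layers).
W = rng.normal(scale=0.1, size=(8, 2))
theta, lr = 2.0, 0.05
pos = np.array([1.0, 0.1])  # toy stand-in for a "positive" example
neg = np.array([0.1, 1.0])  # toy stand-in for a "negative" example
for _ in range(300):
    for x, sign in ((pos, 1.0), (neg, -1.0)):
        g, h = goodness(W, x)
        p = 1.0 / (1.0 + np.exp(-sign * (g - theta)))  # prob. of correct side
        dL_dg = -sign * (1.0 - p)       # gradient of -log p w.r.t. goodness
        W -= lr * dL_dg * np.outer(2.0 * h, x)  # local gradient step
```

After training, the layer assigns higher goodness to positive inputs than to negative ones; the paper stacks such layers as a feature extractor.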
Mon 7:15 p.m. - 7:50 p.m.
|
Identifiability of Discretized Latent Coordinate Systems via Density Landmarks Detection
(
Poster
)
link
Disentanglement aims to recover meaningful latent ground-truth factors from only the observed distribution. Identifiability provides the theoretical grounding for disentanglement to be well-founded. Unfortunately, unsupervised identifiability of independent latent factors is a theoretically proven impossibility in the i.i.d. setting under a general nonlinear smooth map from factors to observations. In this work, we show that, remarkably, it is possible to recover discretized latent coordinates under the most general smooth mapping (a diffeomorphism) without any additional inductive bias on the mapping, provided the latent density has axis-aligned discontinuity landmarks, and without making the unrealistic assumption of statistical independence of the factors. We introduce this novel form of identifiability and provide a comprehensive proof of the recovery of discretized coordinates. |
Vitória Barin-Pacela · Kartik Ahuja · Simon Lacoste-Julien · Pascal Vincent 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
An Agent-Search Strategy for Contrast Enhancement in Medical Images
(
Poster
)
link
Image Contrast Enhancement (ICE) is one of the most widely used image-processing techniques for improving image quality at preprocessing stages. However, common ICE implementations such as Histogram Equalization (HE) or Contrast-Limited Adaptive Histogram Equalization (CLAHE) often introduce noise or cause information loss, which negatively impacts overall image processing. This paper presents an implementation of the recently introduced Agent-Search-based ICE (AS-ICE) technique, which effectively reduces these undesired side effects. A comparative study shows that AS-ICE outperforms CLAHE on multiple image quality criteria. Furthermore, original images and images enhanced with CLAHE and AS-ICE are fed into a computer vision system for tuberculosis diagnosis; the AS-ICE images lead to the most accurate results, reducing the classification error from 28% to 0%. |
Nayeli Areli Perez Padilla 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
Convolutional neural network regression to estimate the mass parameter of astrophysical binary black hole systems
(
Poster
)
link
In this paper we propose the use of a deep learning based model for inferring astrophysical information of binary black hole (BBH) systems from observed gravitational wave (GW) signals. We focus on estimating the total mass of BBH systems, $M_{total}$, using a convolutional neural network regression (CNNR) model. We built a large dataset of 2D images representing the time-frequency evolution of BBH GW signals embedded in noise, where the true total mass $M_{total} \in [10, 200] M_\odot$ is known for each generated image. A hold-out cross-validation procedure was performed to train and evaluate five CNNR architectures with different numbers and sizes of kernels. The results indicate that the proposed deep neural network regression models provide reliable point-parameter estimations with high accuracy. This parameter estimation approach can easily be extended to reconstruct more parameters of astrophysical sources directly from observed GW events.
|
Andres Antelis · Claudia Moreno 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
Quadtree features for machine learning on CMDs
(
Poster
)
link
The upcoming facilities like the Vera C. Rubin Observatory will provide extremely deep photometry of thousands of star clusters to the edge of the Galaxy and beyond, which will require adequate tools for automatic analysis, capable of performing tasks such as characterizing a star cluster through the analysis of color-magnitude diagrams (CMDs). The latter are essentially point clouds in N-dimensional space, with the number of dimensions corresponding to the photometric bands employed. In this context, machine learning techniques suitable for tabular data are not immediately applicable to CMDs because the number of stars included in a given CMD is variable, and equivariance for permutations is required. To address this issue without introducing ad-hoc manipulations that would require human oversight, here we present a new CMD featurization procedure that summarizes a CMD by means of a quadtree-like structure through iterative partitions of the color-magnitude plane, extracting a fixed number of meaningful features of the relevant subregion from any given CMD. The approach is robust to photometric noise and contamination, and a simple linear regression on our features predicts distance modulus (metallicity) with a scatter of 0.33 dex (0.16 dex) in cross-validation. |
Jose Schiappacasse · Sara Lucatello · Mario Pasquato 🔗 |
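To see why a quadtree-like summary yields a fixed-length, permutation-invariant representation of a variable-size point cloud, here is a minimal sketch: recursively split the color-magnitude plane at the median point of each cell and record the fraction of stars per cell. This is an illustrative construction, not the paper's exact featurization:

```python
import numpy as np

def quadtree_features(color, mag, depth=2):
    """Fixed-length summary of a CMD point cloud via recursive quadrant
    splits at per-cell medians. Output length depends only on `depth`,
    not on the number of stars, and the features are invariant to the
    ordering of the stars."""
    pts = np.column_stack([np.asarray(color, float), np.asarray(mag, float)])
    n_total = max(len(pts), 1)
    feats = []

    def split(p, d):
        if d == 0:
            return
        cx, cy = (np.median(p[:, 0]), np.median(p[:, 1])) if len(p) else (0.0, 0.0)
        quads = [p[(p[:, 0] < cx) & (p[:, 1] < cy)],
                 p[(p[:, 0] < cx) & (p[:, 1] >= cy)],
                 p[(p[:, 0] >= cx) & (p[:, 1] < cy)],
                 p[(p[:, 0] >= cx) & (p[:, 1] >= cy)]]
        for q in quads:
            feats.append(len(q) / n_total)  # star fraction in this cell
            split(q, d - 1)                 # recurse into the subregion

    split(pts, depth)
    return np.array(feats)
```

At `depth=2` this yields 4 + 16 = 20 features for any CMD, ready for downstream tabular regressors such as the linear model mentioned in the abstract.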
Mon 7:15 p.m. - 7:50 p.m.
|
Generative network-based reduced-order model for prediction, data assimilation and uncertainty quantification
(
Poster
)
link
We propose a new method in which a generative network (GN) integrated into a reduced-order model (ROM) framework is used to solve inverse problems for partial differential equations (PDEs). The aim is to match available measurements and estimate the corresponding uncertainties associated with the states and parameters of a numerical physical simulation. The GN is trained using only unconditional simulations of the discretized PDE model. We compare the proposed method with the gold standard, Markov chain Monte Carlo. We apply the proposed approaches to a spatio-temporal compartmental model in epidemiology. The results show that the proposed GN-based ROM can efficiently quantify uncertainty and accurately match the measurements and the gold standard, using only a few unconditional simulations of the full-order numerical PDE model. |
Vinicius L S Silva · Claire Heaney · Christopher Pain 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
1-Path-Norm Regularization of Deep Neural Networks
(
Poster
)
link
The so-called path-norm measure is considered one of the best indicators for good generalization of neural networks. This paper introduces a proximal gradient framework for the training of deep neural networks via 1-path-norm regularization, which is applicable to general deep architectures. We address the resulting nonconvex nonsmooth optimization model by transforming the intractable induced proximal operation to an equivalent differentiable proximal operation. We compare automatic differentiation (backpropagation) algorithms with the proximal gradient framework in numerical experiments on FashionMNIST and CIFAR10. We show that 1-path-norm regularization is a better choice than weight-decay for fully connected architectures, and it improves the robustness to the presence of noisy labels. In this latter setting, the proximal gradient methods have an advantage over automatic differentiation. |
Fabian Latorre · Antoine Bonnet · Paul Rolland · Nadav Hallak · Volkan Cevher 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
Evaluating the impact of incorporating ’legalese’ definitions and abstractive summarization on the categorization of legal cases by their holdings
(
Poster
)
link
Legal text is difficult to understand and requires domain-specific knowledge to read. This work aims to investigate the effect that model stacking and input processing have on information fidelity with the motivation to explore possibilities of expanding the accessibility of legal texts. We developed a legal dictionary through the United States Courts’ Glossary of Legal Terms to map complex terms into simple English and used FLAN-T5 to summarize observations. To evaluate performance, we used binary text classification to predict case holdings using LLMs (Large Language Models) and evaluated the results with and without model pretraining. To assess information fidelity, we ask: "Does model stacking affect classification performance?" and "Does performance change with pretraining?" |
Shiu Tin Ivan Ko · Daniela Cortes Bermudez · Henry Han · Huiyun Zhang 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
An Empirical Analysis Towards Replacing Vocabulary-Rigid Embeddings by a Vocabulary-Free Mechanism
(
Poster
)
link
This paper addresses the limitations of subword-based models in NLP by aligning the word embedding layer of a vocabulary-rigid transformer model to a vocabulary-free one. To do so, a CNN is trained to mimic the word embedding layer of a BERT model, using a sequence of byte tokens as input. The study compares cosine-based and Euclidean-based loss functions for training the student network and finds better results with cosine-based metrics. The research contributes techniques for re-training transformer embedding layers and provides insights into loss function selection. The findings have implications for developing flexible and robust NLP models. |
Alejandro Rodriguez Perez · Korn Sooksatra · Pablo Rivas · Ernesto Quevedo Caballero · Javier Turek · Gisela Bichler · Tomas Cerny · Laurie Giddens · Stacie Petter 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
Studying Generalization on Memory-Based Methods in Continual Learning
(
Poster
)
link
One of the objectives of Continual Learning is to learn new concepts continually over a stream of experiences and at the same time avoid catastrophic forgetting. To mitigate complete knowledge overwriting, memory-based methods store a percentage of previous data distributions to be used during training. Although these methods produce good results, few studies have tested their out-of-distribution generalization properties, as well as whether these methods overfit the replay memory. In this work, we show that although these methods can help in traditional in-distribution generalization, they can strongly impair out-of-distribution generalization by learning spurious features and correlations. Using a controlled environment, using the Synbol benchmark generator (Lacoste et al., 2020), we demonstrate that this lack of out-of-distribution generalization mainly occurs in the linear classifier. |
Felipe del Rio · Julio Hurtado · Cristian Calderon · Alvaro Soto · Vincenzo Lomonaco 🔗 |
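For readers unfamiliar with the memory-based methods this abstract studies, a replay buffer in continual learning simply keeps a bounded sample of past experiences to mix into training on new tasks. A generic reservoir-sampling sketch (an assumption of this note, not the specific mechanism evaluated in the paper):

```python
import random

class ReplayMemory:
    """Minimal reservoir-sampling replay buffer: keeps a bounded,
    uniformly sampled subset of all examples seen so far, to be
    replayed alongside new-task data."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = self.rng.randrange(self.seen)  # reservoir sampling step
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        """Draw a mini-batch of stored examples for replay."""
        return self.rng.sample(self.items, min(k, len(self.items)))
```

The paper's finding is about what models trained against such a bounded memory generalize to out of distribution, not about the buffer mechanics themselves.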
Mon 7:15 p.m. - 7:50 p.m.
|
Recognition of grammatical classes of imagined speech words using a convolutional neural network and brain signals
(
Poster
)
link
In this paper, we analyze in the time domain the signals acquired with 32 electroencephalography (EEG) channels from 10 healthy participants during an imagined speech task with words in Spanish. We performed a statistical test to determine the location in space and time of the differences produced by imagining words from two grammatical classes: decision adverbs and nouns. Based on the statistical test results and using the EEGNet convolutional neural network, we evaluated three different data window sizes for the classification of the two grammatical groups. In the larger window W1 (700ms), we obtained an accuracy of 60.1%, while in the smaller window W3 (200ms), the accuracy obtained was 69.5%. This work is a first approach to decoding imagined speech words, intended to be implemented in a brain-machine interface focused on patients with amyotrophic lateral sclerosis. |
Denise Magdalena Alonso Vazquez · Tonatiuh Hernández-Del-Toro · Omar Mendoza-Montoya · Ricardo Caraza · Hector Martinez · Carlos Reyes-Garcia · Javier M. Antelis 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
Terrain Classification Enhanced with Uncertainty for Space Exploration Robots from Proprioceptive Data
(
Poster
)
link
Terrain Classification is an essential task in space exploration, where unpredictable environments are difficult to observe using only exteroceptive sensors such as vision. Neural Network classifiers can achieve high performance but may be deemed untrustworthy because they lack transparency, which makes them unreliable for making high-stakes decisions during mission planning. We address this by proposing Neural Networks with Uncertainty Quantification for Terrain Classification. We enable our Neural Networks with Monte Carlo Dropout, DropConnect, and Flipout in time series-capable architectures using only proprioceptive data as input. We use Bayesian Optimization with Hyperband for efficient hyperparameter optimization to find optimal models for trustworthy terrain classification. |
Mariela De Lucas Alvarez · Jichen Guo · Raul Dominguez · Matias Valdenegro-Toro 🔗 |
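One of the uncertainty mechanisms named in this abstract, Monte Carlo Dropout, can be sketched in a few lines: keep dropout active at test time, run several stochastic forward passes, and read the spread of the outputs as an uncertainty estimate. The tiny two-layer network below is an illustrative stand-in for the paper's time-series architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, W2, p=0.5, T=200):
    """Monte Carlo Dropout sketch: T stochastic forward passes through
    a small ReLU network with dropout left ON at inference; returns the
    mean prediction and its standard deviation as an uncertainty proxy."""
    outs = []
    for _ in range(T):
        h = np.maximum(W1 @ x, 0.0)       # hidden layer
        mask = rng.random(h.shape) > p    # dropout stays active at test time
        h = h * mask / (1.0 - p)          # inverted-dropout rescaling
        outs.append(W2 @ h)
    outs = np.array(outs)
    return outs.mean(axis=0), outs.std(axis=0)
```

A high standard deviation flags inputs (here, terrain signatures) the model is unsure about, which is the signal a mission planner would act on.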
Mon 7:15 p.m. - 7:50 p.m.
|
Is ReLU Adversarially Robust?
(
Poster
)
link
The efficacy of deep learning models has been called into question by the presence of adversarial examples. Addressing the vulnerability of deep learning models to adversarial examples is crucial for ensuring their continued development and deployment. In this work, we focus on the role of rectified linear unit (ReLU) activation functions in the generation of adversarial examples. ReLU functions are commonly used in deep learning models because they facilitate the training process. However, our empirical analysis demonstrates that ReLU functions are not robust against adversarial examples. We propose a modified version of the ReLU function, which improves robustness against adversarial examples. Our results are supported by an experiment, which confirms the effectiveness of our proposed modification. Additionally, we demonstrate that applying adversarial training to our customized model further enhances its robustness compared to a general model. |
Korn Sooksatra · Greg Hamerly · Pablo Rivas 🔗 |
Mon 7:15 p.m. - 7:50 p.m.
|
Omega: Optimistic EMA Gradients
(
Poster
)
link
Stochastic min-max optimization has gained interest in the machine learning community with the advancements in GANs and adversarial training. Although game optimization is fairly well understood in the deterministic setting, some issues persist in the stochastic regime. Recent work has shown that stochastic gradient descent-ascent methods such as the optimistic gradient are highly sensitive to noise or can fail to converge. Although alternative strategies exist, they can be prohibitively expensive. We introduce Omega, a method with optimistic-like updates that mitigates the impact of noise by incorporating an EMA of historic gradients in its update rule. We also explore a variation of this algorithm that incorporates momentum. Although we do not provide convergence guarantees, our experiments on stochastic games show that Omega outperforms the optimistic gradient method when applied to linear players. |
Juan Ramirez · Rohan Sukumaran · Quentin Bertrand · Gauthier Gidel 🔗 |
Mon 7:50 p.m. - 8:00 p.m.
|
Closing Remarks
(
Closing Remarks
)
SlidesLive Video |
🔗 |