<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Bo's Collection]]></title><description><![CDATA[bobobobobobobobobobobobobobobo]]></description><link>https://collections.bobobobobobo.net/</link><image><url>https://collections.bobobobobobo.net/favicon.png</url><title>Bo&apos;s Collection</title><link>https://collections.bobobobobobo.net/</link></image><generator>Ghost 5.76</generator><lastBuildDate>Fri, 08 May 2026 12:07:53 GMT</lastBuildDate><atom:link href="https://collections.bobobobobobo.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Optimizing Multi-Touch Textile and Tactile Skin Sensing Through Circuit Parameter Estimation]]></title><description><![CDATA[<p>In this work, we explore Multi-Touch Textile and Tactile Skin Sensing. Due to the flexible and stretchable nature of textile and tactile skins, which can lead to noisy and dynamic pressure signals, this work adopts an optimization approach. It focuses on finding the most likely pressure distribution across the entire</p>]]></description><link>https://collections.bobobobobobo.net/optimizing-multi-touch-textile-and-tactile-skin-sensing-through-circuit-parameter-estimation/</link><guid isPermaLink="false">65acb4f0ea4edd0001ffe1f7</guid><dc:creator><![CDATA[Bo]]></dc:creator><pubDate>Tue, 02 Jan 2024 21:01:00 GMT</pubDate><media:content url="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-1.10.08-AM.png" medium="image"/><content:encoded><![CDATA[<img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-1.10.08-AM.png" alt="Optimizing Multi-Touch Textile and Tactile Skin Sensing Through Circuit Parameter Estimation"><p>In this work, we explore Multi-Touch Textile and Tactile Skin Sensing. Due to the flexible and stretchable nature of textile and tactile skins, which can lead to noisy and dynamic pressure signals, this work adopts an optimization approach. It focuses on finding the most likely pressure distribution across the entire skin, considering the underlying circuit topology. This method aims to enhance accuracy and reliability in interpreting pressure data, a crucial aspect in the effective application of tactile skins for human-robot interaction.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/AO4oxlnJs0I?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Optimizing Multi-Touch Textile and Tactile Skin Sensing Through Circuit Parameter Estimation"></iframe></figure><p><strong>Why this work may be useful</strong></p><p>This work enhances the pressure sensing accuracy of textile and tactile skin, effectively reducing false signals. Its methodology and findings are not only applicable to textiles but can also be generalized to other resistance-based pressure sensing technologies.</p><p><strong>Why this work may not be useful</strong></p><p>Developing a more advanced and complex circuit or knit topology for the tactile skin could significantly enhance the quality of the signals it produces. 
Such improvements might render the methods outlined in this work less critical for achieving precise pressure sensing, as the improved design itself could inherently provide more accurate and reliable data.</p><p><strong>Verdict: Useful for the current skin manufacturing process, but this may no longer be the case as the technology improves</strong></p>]]></content:encoded></item><item><title><![CDATA[Signed Distance Field (SDF) using FPGA]]></title><description><![CDATA[<p>In this work, we focused on generating Signed Distance Fields (SDF) from convex objects using a Xilinx FPGA. Signed Distance Fields (SDF) calculations are common in robot motion planning, obstacle avoidance, and computer graphics. </p><div class="kg-card kg-file-card"><a class="kg-file-card-container" href="https://collections.bobobobobobo.net/content/files/2024/01/S_boyings_report_presentation.pdf" title="Download" download><div class="kg-file-card-contents"><div class="kg-file-card-title">S_boyings_report_presentation</div><div class="kg-file-card-caption"></div><div class="kg-file-card-metadata"><div class="kg-file-card-filename">S_boyings_report_presentation.pdf</div><div class="kg-file-card-filesize">2 MB</div></div></div><div class="kg-file-card-icon"><svg viewbox="0 0 24 24"><defs><style>.a{fill:none;stroke:currentColor;stroke-linecap:</style></defs></svg></div></a></div>]]></description><link>https://collections.bobobobobobo.net/signed-distance-field-sdf-using-fpga/</link><guid isPermaLink="false">65acb5a6ea4edd0001ffe1fe</guid><dc:creator><![CDATA[Bo]]></dc:creator><pubDate>Fri, 08 Dec 2023 20:31:00 GMT</pubDate><media:content url="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-1.12.06-AM.png" medium="image"/><content:encoded><![CDATA[<img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-1.12.06-AM.png" alt="Signed Distance Field (SDF) using FPGA"><p>In this work, we focused on generating Signed Distance Fields (SDF) from convex objects using a Xilinx FPGA. Signed Distance Fields (SDF) calculations are common in robot motion planning, obstacle avoidance, and computer graphics. </p><div class="kg-card kg-file-card"><a class="kg-file-card-container" href="https://collections.bobobobobobo.net/content/files/2024/01/S_boyings_report_presentation.pdf" title="Download" download><div class="kg-file-card-contents"><div class="kg-file-card-title">S_boyings_report_presentation</div><div class="kg-file-card-caption"></div><div class="kg-file-card-metadata"><div class="kg-file-card-filename">S_boyings_report_presentation.pdf</div><div class="kg-file-card-filesize">2 MB</div></div></div><div class="kg-file-card-icon"><svg viewbox="0 0 24 24"><defs><style>.a{fill:none;stroke:currentColor;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.5px;}</style></defs><title>download-circle</title><polyline class="a" points="8.25 14.25 12 18 15.75 14.25"/><line class="a" x1="12" y1="6.75" x2="12" y2="18"/><circle class="a" cx="12" cy="12" r="11.25"/></svg></div></a></div><p><strong>Signed Distance Field</strong></p><p>A Signed Distance Field (SDF) represents the distance from a given point in space to the nearest surface of a shape or object. The &quot;signed&quot; aspect indicates that this distance is positive when the point is outside the object and negative when it is inside. This allows the SDF to convey not just the proximity to an object&apos;s surface, but also the positional relationship of the point relative to the object&apos;s boundary. 
This technique is useful in various applications like rendering, collision detection, and simulating physical phenomena.</p><p><strong>The CPU Approach</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-3.01.35-PM.png" class="kg-image" alt="Signed Distance Field (SDF) using FPGA" loading="lazy" width="1934" height="904" srcset="https://collections.bobobobobobo.net/content/images/size/w600/2024/01/Screenshot-2024-01-21-at-3.01.35-PM.png 600w, https://collections.bobobobobobo.net/content/images/size/w1000/2024/01/Screenshot-2024-01-21-at-3.01.35-PM.png 1000w, https://collections.bobobobobobo.net/content/images/size/w1600/2024/01/Screenshot-2024-01-21-at-3.01.35-PM.png 1600w, https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-3.01.35-PM.png 1934w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Two approaches to calculating a Signed Distance Field</span></figcaption></figure><ol><li><strong>Naive Solution</strong>: Calculates the distance from a point to every surface point of an object and then selects the minimum distance. This method is simple to implement but can be computationally expensive, especially for complex objects or high-resolution fields.</li><li><strong>Hill Climb Solution</strong>: The hill climb algorithm starts with an initial solution and iteratively makes small changes, each time moving towards a solution that is closer to the object&apos;s surface. The process continues until no further improvements can be made or a certain condition is met. Since the objects in the scene are convex, the iterations eventually converge to the closest point.</li></ol><p><strong>The case for using FPGA</strong></p><p>The Hill Climb solution offers a significant performance advantage over the naive approach by only accessing and computing vertices on the search path. This approach, however, is inherently sequential and varies in the number of iterations needed to converge across the grid. This variability makes it a suitable case for FPGAs, where GPUs fall short. FPGAs, with their reconfigurable logic, enable fine-grained load balancing across parallel cores. This adaptability allows FPGAs to maintain high efficiency and utilization, as they can dynamically redistribute tasks (job stealing) to manage points in space requiring different numbers of iterations. This feature is particularly advantageous for algorithms like Hill Climb, where workload variability is a key challenge.</p><p><strong>Performance &amp; Efficiency Advantages</strong></p><figure class="kg-card kg-image-card"><img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-3.19.11-PM.png" class="kg-image" alt="Signed Distance Field (SDF) using FPGA" loading="lazy" width="1060" height="618" srcset="https://collections.bobobobobobo.net/content/images/size/w600/2024/01/Screenshot-2024-01-21-at-3.19.11-PM.png 600w, https://collections.bobobobobobo.net/content/images/size/w1000/2024/01/Screenshot-2024-01-21-at-3.19.11-PM.png 1000w, https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-3.19.11-PM.png 1060w" sizes="(min-width: 720px) 720px"></figure><p>We achieved a significant reduction in latency (45.4% lower) on the Ultra96v2 board at 150 MHz compared to a sequential CPU at 1.5 GHz on the test scene.</p>
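<p>As a concrete CPU-side illustration of the two solutions above, here is a minimal Python sketch (an illustrative reconstruction under stated assumptions, not the FPGA implementation: <code>vertices</code> is assumed to be an N x 3 array of a convex mesh&apos;s vertices, <code>neighbors[i]</code> the vertex indices adjacent to vertex <code>i</code>, and the sign computation and exact point-to-surface projection are omitted for brevity):</p><pre><code class="language-python">import numpy as np

def sdf_naive(point, vertices):
    # Naive solution: measure the distance to every mesh vertex and keep the minimum.
    return np.linalg.norm(vertices - point, axis=1).min()

def sdf_hill_climb(point, vertices, neighbors, start=0):
    # Hill climb solution: repeatedly move to whichever neighboring vertex is closer
    # to the query point. Because the mesh is convex, this local search converges to
    # the globally closest vertex while touching far fewer vertices than the naive scan.
    current = start
    best = np.linalg.norm(vertices[current] - point)
    improved = True
    while improved:
        improved = False
        for n in neighbors[current]:
            d = np.linalg.norm(vertices[n] - point)
            if d &lt; best:
                best, current, improved = d, n, True
    return best
</code></pre><p>The number of iterations in <code>sdf_hill_climb</code> depends on where the query point sits, which is exactly the per-query workload variability that the FPGA&apos;s job-stealing cores are designed to absorb.</p>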
<p><strong>Why this work may be useful</strong></p><p>In robotics, where SDF calculation latency is critical, this FPGA-based solution offers a hardware-accelerated, energy-efficient alternative. Unlike GPUs, which might perform better by processing more grid points simultaneously but consume more energy, the FPGA can leverage the algorithmic efficiency of hill climbing methods. This makes FPGAs particularly suitable for scenarios demanding both high performance and energy efficiency, as they can optimize SDF calculations effectively without the energy intensity of brute-force GPU approaches.</p><p><strong>Why this work may not be useful</strong></p><p>In preliminary benchmarks, a brute-force GPU implementation on an RTX 3060 has proven faster than the FPGA solution. This comparison must also consider the extra human effort required for design space exploration when adapting the algorithm to new FPGA hardware. This process, necessary for optimizing performance on different FPGA platforms, can be time-consuming.</p><p><strong>Verdict: Only useful in special use cases where absolute optimal efficiency / latency are required</strong></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Customizing Textile and Tactile Skins for Interactive Industrial Robots]]></title><description><![CDATA[<p><a href="https://arxiv.org/abs/2308.03072?ref=collections.bobobobobobo.net">paper</a></p>
<div class="kg-card kg-file-card"><a class="kg-file-card-container" href="https://collections.bobobobobobo.net/content/files/2024/01/Customizing-Textile-and-Tactile-Skins-for-Interactive-Industrial-Robots.pdf" title="Download" download><div class="kg-file-card-contents"><div class="kg-file-card-title">Customizing Textile and Tactile Skins for Interactive Industrial Robots</div><div class="kg-file-card-caption">Slides</div><div class="kg-file-card-metadata"><div class="kg-file-card-filename">Customizing Textile and Tactile Skins for Interactive Industrial Robots.pdf</div><div class="kg-file-card-filesize">3 MB</div></div></div><div class="kg-file-card-icon"><svg viewbox="0 0 24 24"><defs><style>.a{fill:none;stroke:currentColor;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.5px;}</style></defs><title>download-circle</title><polyline class="a" points="8.25 14.25 12 18 15.75 14.25"/><line class="a" x1="12" y1="6.75" x2="12" y2="18"/><circle class="a" cx="12" cy="12" r="11.25"/></svg></div></a></div><p>In this work, we explore using textile and tactile skins for human-robot interactions and</p>]]></description><link>https://collections.bobobobobobo.net/customizing-textile-and-tactile-skins-for-interactive-industrial-robots/</link><guid isPermaLink="false">65acb46cea4edd0001ffe1f0</guid><dc:creator><![CDATA[Bo]]></dc:creator><pubDate>Fri, 30 Jun 2023 20:49:00 GMT</pubDate><media:content url="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-1.07.24-AM.png" medium="image"/><content:encoded><![CDATA[<img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-1.07.24-AM.png" alt="Customizing Textile and Tactile Skins for Interactive Industrial Robots"><p><a href="https://arxiv.org/abs/2308.03072?ref=collections.bobobobobobo.net">paper</a></p>
<div class="kg-card kg-file-card"><a class="kg-file-card-container" href="https://collections.bobobobobobo.net/content/files/2024/01/Customizing-Textile-and-Tactile-Skins-for-Interactive-Industrial-Robots.pdf" title="Download" download><div class="kg-file-card-contents"><div class="kg-file-card-title">Customizing Textile and Tactile Skins for Interactive Industrial Robots</div><div class="kg-file-card-caption">Slides</div><div class="kg-file-card-metadata"><div class="kg-file-card-filename">Customizing Textile and Tactile Skins for Interactive Industrial Robots.pdf</div><div class="kg-file-card-filesize">3 MB</div></div></div><div class="kg-file-card-icon"><svg viewbox="0 0 24 24"><defs><style>.a{fill:none;stroke:currentColor;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.5px;}</style></defs><title>download-circle</title><polyline class="a" points="8.25 14.25 12 18 15.75 14.25"/><line class="a" x1="12" y1="6.75" x2="12" y2="18"/><circle class="a" cx="12" cy="12" r="11.25"/></svg></div></a></div><p>In this work, we explore using textile and tactile skins for human-robot interactions and control.</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/YGUV1dHuCRc?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="RobotSweater : Fabric Tactile Sensor &quot;Skin&quot;"></iframe></figure><p><strong>Why this work may be useful</strong></p><p>The use of textile and tactile skin in human-robot interaction and learning is highly beneficial due to the familiarity, affordability, durability, and customizability of textiles. This research showcases how tactile skin can be employed for safe robot control, presenting a compelling alternative to traditional force-torque sensors. This innovative approach enhances human-robot interactions, potentially leading to more intuitive and safer robotic systems in various applications.</p><p><strong>Why this work may not be useful</strong></p><p>Soft, printed pressure-sensitive PCBs may offer an alternative to textile and tactile skins, potentially providing greater accuracy in pressure sensitivity. Furthermore, future advancements could integrate pressure sensors directly into a robot&apos;s outer casing, potentially replacing the need for tactile skin. This integration could streamline design and enhance the functionality of robotic systems, offering a more seamless and efficient approach to detecting and responding to external human inputs.</p><p><strong>Verdict: Requires significant refinement to be useful &amp; competitive with other alternatives</strong></p>]]></content:encoded></item><item><title><![CDATA[Learning from Physical Human Feedback: An Object-Centric One-Shot Adaptation Method]]></title><description><![CDATA[<p>The paper introduces our innovative approach in robotics, Object Preference Adaptation (OPA), which focuses on adapting robot behavior based on physical human feedback. This method allows robots to understand and adjust to human preferences in real-time by interpreting human interventions on the robot related to specific objects. 
Our insight is</p>]]></description><link>https://collections.bobobobobobo.net/learning-from-physical-human-feedback-an-object-centric-one-shot-adaptation-method/</link><guid isPermaLink="false">65acab18ea4edd0001ffe187</guid><dc:creator><![CDATA[Bo]]></dc:creator><pubDate>Thu, 01 Jun 2023 06:40:00 GMT</pubDate><media:content url="https://collections.bobobobobobo.net/content/images/2024/01/thumbnail.png" medium="image"/><content:encoded><![CDATA[<img src="https://collections.bobobobobobo.net/content/images/2024/01/thumbnail.png" alt="Learning from Physical Human Feedback: An Object-Centric One-Shot Adaptation Method"><p>The paper introduces our innovative approach in robotics, Object Preference Adaptation (OPA), which focuses on adapting robot behavior based on physical human feedback. This method allows robots to understand and adjust to human preferences in real-time by interpreting human interventions on the robot related to specific objects. Our insight is that most human preferences can be attributed to objects in the environment. </p><p><strong>Paper Website</strong></p><p><a href="https://alvinosaur.github.io/AboutMe/projects/opa/?ref=collections.bobobobobobo.net">https://alvinosaur.github.io/AboutMe/projects/opa/</a></p>
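<p>As a loose illustration of this object-centric idea (a hypothetical sketch only, not the actual OPA network or update rule), a single physical correction can be converted into new per-object preference weights:</p><pre><code class="language-python">import numpy as np

def adapt_from_intervention(weights, robot_pos, corrected_pos, objects, lr=1.0):
    # Hypothetical one-shot update: objects the human pushed the robot toward get a
    # higher preference weight, and objects the human pushed it away from get a lower one.
    correction = corrected_pos - robot_pos
    new_weights = dict(weights)
    for name, obj_pos in objects.items():
        to_object = obj_pos - robot_pos
        denom = np.linalg.norm(to_object) * np.linalg.norm(correction) + 1e-9
        alignment = float(np.dot(correction, to_object)) / denom
        new_weights[name] = weights.get(name, 0.0) + lr * alignment
    return new_weights

# Example: one nudge away from the scanner and toward the shelf (made-up positions).
objects = {"scanner": np.array([0.5, 0.0, 0.3]), "shelf": np.array([-0.4, 0.2, 0.3])}
weights = adapt_from_intervention({}, np.zeros(3), np.array([-0.1, 0.05, 0.0]), objects)
</code></pre><p>The section below describes how the actual policy is trained on synthetic data and adapted online from a single such intervention.</p>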
<p><strong>Adapting the policy</strong></p><figure class="kg-card kg-image-card"><img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.33.01-AM.png" class="kg-image" alt="Learning from Physical Human Feedback: An Object-Centric One-Shot Adaptation Method" loading="lazy" width="1632" height="676" srcset="https://collections.bobobobobobo.net/content/images/size/w600/2024/01/Screenshot-2024-01-21-at-12.33.01-AM.png 600w, https://collections.bobobobobobo.net/content/images/size/w1000/2024/01/Screenshot-2024-01-21-at-12.33.01-AM.png 1000w, https://collections.bobobobobobo.net/content/images/size/w1600/2024/01/Screenshot-2024-01-21-at-12.33.01-AM.png 1600w, https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.33.01-AM.png 1632w" sizes="(min-width: 720px) 720px"></figure><p>OPA operates by initially training a base policy to generate a range of behaviors. It then updates this policy online in response to human feedback. Notably, this adaptation requires only a single intervention from a human, enabling the robot to produce new behaviors that were not part of its initial training. This process leverages synthetic data, offering a cost-effective alternative to extensive human demonstrations.</p><p><strong>Why this work may be useful</strong></p><p>OPA significantly reduces the amount of training data required for teaching robots. By utilizing synthetic data and learning directly from a single human intervention, OPA circumvents the need for extensive, costly datasets typically used in robotic training by focusing on relationships between the human and objects in the environment. This efficient use of data not only speeds up the training process but also makes it more generalizable across users.</p><p><strong>Why this work may not be useful</strong></p><p>The effectiveness of the OPA method might be limited as it primarily attributes human preferences to object positions, potentially overlooking other critical factors such as object color, texture, or contextual environment and task. This narrow focus might not fully capture the complexity of human preferences, leading to a less comprehensive understanding and adaptability in real-world scenarios where such nuances play a significant role.</p><p><strong>Verdict: Useful for a restricted set of human-robot interaction and collaboration scenarios</strong></p>]]></content:encoded></item><item><title><![CDATA[Parallel Minimum Distance Query Among Convex Meshes]]></title><description><![CDATA[<p>visit our detailed project report:<br>
<a href="https://15418-s23.github.io/final-project-website/?ref=collections.bobobobobobo.net">link</a> <a href="https://15418-s23.github.io/final-project-website/poster.pdf?ref=collections.bobobobobobo.net">poster</a> <a href="https://15418-s23.github.io/final-project-website/report.pdf?ref=collections.bobobobobobo.net">pdf</a> <a href="https://github.com/15418-s23/final-project?ref=collections.bobobobobobo.net">code</a></p>
<p><strong>What is this</strong></p><p>We present a parallel approach to measure the closest distance between 3D convex shapes, an important task in computer graphics and robotics. It uses two specialized computing methods, CUDA and OpenMP, to make this process faster and more</p>]]></description><link>https://collections.bobobobobobo.net/parallel-minimum-distance-query-among-convex-meshes/</link><guid isPermaLink="false">65acb040ea4edd0001ffe1bf</guid><dc:creator><![CDATA[Bo]]></dc:creator><pubDate>Wed, 03 May 2023 19:48:00 GMT</pubDate><media:content url="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.49.30-AM.png" medium="image"/><content:encoded><![CDATA[<img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.49.30-AM.png" alt="Parallel Minimum Distance Query Among Convex Meshes"><p>visit our detailed project report:<br>
<a href="https://15418-s23.github.io/final-project-website/?ref=collections.bobobobobobo.net">link</a> <a href="https://15418-s23.github.io/final-project-website/poster.pdf?ref=collections.bobobobobobo.net">poster</a> <a href="https://15418-s23.github.io/final-project-website/report.pdf?ref=collections.bobobobobobo.net">pdf</a> <a href="https://github.com/15418-s23/final-project?ref=collections.bobobobobobo.net">code</a></p>
<p><strong>What is this</strong></p><p>We present a parallel approach to measure the closest distance between 3D convex shapes, an important task in computer graphics and robotics. It uses two specialized computing methods, CUDA and OpenMP, to make this process faster and more efficient. </p><p><strong>The Naive Approach</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.55.09-AM.png" class="kg-image" alt="Parallel Minimum Distance Query Among Convex Meshes" loading="lazy" width="1928" height="788" srcset="https://collections.bobobobobobo.net/content/images/size/w600/2024/01/Screenshot-2024-01-21-at-12.55.09-AM.png 600w, https://collections.bobobobobobo.net/content/images/size/w1000/2024/01/Screenshot-2024-01-21-at-12.55.09-AM.png 1000w, https://collections.bobobobobobo.net/content/images/size/w1600/2024/01/Screenshot-2024-01-21-at-12.55.09-AM.png 1600w, https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.55.09-AM.png 1928w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">The GJK Algorithm</span></figcaption></figure><p><strong>Our Optimized, Parallel Approach</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.57.12-AM.png" class="kg-image" alt="Parallel Minimum Distance Query Among Convex Meshes" loading="lazy" width="1332" height="1396" srcset="https://collections.bobobobobobo.net/content/images/size/w600/2024/01/Screenshot-2024-01-21-at-12.57.12-AM.png 600w, https://collections.bobobobobobo.net/content/images/size/w1000/2024/01/Screenshot-2024-01-21-at-12.57.12-AM.png 1000w, https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.57.12-AM.png 1332w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Our optimized approach involves two stages: AABB filtering and the GJK algorithm</span></figcaption></figure><p><strong>The Performance Advantage</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.58.37-AM.png" class="kg-image" alt="Parallel Minimum Distance Query Among Convex Meshes" loading="lazy" width="2000" height="1147" srcset="https://collections.bobobobobobo.net/content/images/size/w600/2024/01/Screenshot-2024-01-21-at-12.58.37-AM.png 600w, https://collections.bobobobobobo.net/content/images/size/w1000/2024/01/Screenshot-2024-01-21-at-12.58.37-AM.png 1000w, https://collections.bobobobobobo.net/content/images/size/w1600/2024/01/Screenshot-2024-01-21-at-12.58.37-AM.png 1600w, https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.58.37-AM.png 2280w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Our approach significantly sped up minimum distance queries through parallelization</span></figcaption></figure><p><strong>Why this work may be useful</strong></p><p>This work has already proved useful in computing minimum distance among convex objects as a subroutine in <a href="https://arxiv.org/abs/1709.00627?ref=collections.bobobobobobo.net">The Convex Feasible Set Algorithm</a>, significantly speeding up its real-time performance.</p>
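<p>For intuition, here is a simplified, sequential Python sketch of the two-stage structure (illustrative only: the broad phase prunes pairs with axis-aligned bounding boxes, and the narrow phase below uses a brute-force vertex check as a stand-in for the GJK query that the parallel implementation performs):</p><pre><code class="language-python">import numpy as np

def aabb(vertices):
    # Axis-aligned bounding box of one convex mesh (vertices: N x 3 array).
    return vertices.min(axis=0), vertices.max(axis=0)

def aabb_distance(box_a, box_b):
    # Lower bound on the true mesh-to-mesh distance, computed from the boxes alone.
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    gap = np.maximum(0.0, np.maximum(min_a - max_b, min_b - max_a))
    return float(np.linalg.norm(gap))

def min_pairwise_distance(meshes):
    # Stage 1: AABB filtering skips pairs that cannot beat the best distance so far.
    # Stage 2: an exact pairwise query (brute-force vertices here, GJK in the project).
    boxes = [aabb(m) for m in meshes]
    best = float("inf")
    for i in range(len(meshes)):
        for j in range(i + 1, len(meshes)):
            if aabb_distance(boxes[i], boxes[j]) &gt;= best:
                continue  # pruned by the broad phase
            diff = meshes[i][:, None, :] - meshes[j][None, :, :]
            best = min(best, float(np.sqrt((diff ** 2).sum(axis=-1).min())))
    return best
</code></pre><p>In a parallel implementation, the pairwise queries are the natural unit of work to distribute across CUDA threads or OpenMP workers, while the AABB filter keeps most pairs from ever reaching the expensive narrow-phase query.</p>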
<p><strong>Why this work may not be useful</strong></p><p>In scenarios where there are only a small number of objects, the benefits of parallel implementation are not fully realized due to the inherent sequential nature of the Gilbert-Johnson-Keerthi (GJK) algorithm.</p><p><strong>Verdict: Already Useful</strong></p>]]></content:encoded></item><item><title><![CDATA[Temporal relation extraction with a graph-based deep biaffine attention model]]></title><description><![CDATA[<p>Our recent research focuses on improving temporal relation extraction from unstructured text. We&apos;ve developed a novel model that not only employs deep learning techniques but also integrates temporal logic for more effective relation extraction. This approach addresses the limitations of traditional models by generating a synthetic dataset, which</p>]]></description><link>https://collections.bobobobobobo.net/temporal-relation-extraction-with-a-graph-based-deep-biaffine-attention-model/</link><guid isPermaLink="false">65aca67dea4edd0001ffe15d</guid><dc:creator><![CDATA[Bo]]></dc:creator><pubDate>Wed, 23 Feb 2022 06:39:00 GMT</pubDate><media:content url="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.14.32-AM.png" medium="image"/><content:encoded><![CDATA[<img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.14.32-AM.png" alt="Temporal relation extraction with a graph-based deep biaffine attention model"><p>Our recent research focuses on improving temporal relation extraction from unstructured text. We&apos;ve developed a novel model that not only employs deep learning techniques but also integrates temporal logic for more effective relation extraction. This approach addresses the limitations of traditional models by generating a synthetic dataset, which is then used in conjunction with dependency parsers. This method improves the model&apos;s ability to learn and understand complex temporal relationships in text.</p><p>visit our detailed project report:<br>
<a href="https://arxiv.org/abs/2201.06125v1?ref=collections.bobobobobobo.net">link</a><br>
<a href="https://arxiv.org/pdf/2201.06125v1.pdf?ref=collections.bobobobobobo.net">pdf</a></p>
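<p>As a rough illustration of the deep biaffine arc scoring described in the Approach section below (a hypothetical NumPy sketch with made-up dimensions, not our exact implementation):</p><pre><code class="language-python">import numpy as np

def biaffine_arc_scores(h_dep, h_head, U, w, b):
    # Deep biaffine scoring (illustrative): score[i, j] rates token j as the head of
    # token i, combining a bilinear term with a head-only linear term and a bias.
    bilinear = h_dep @ U @ h_head.T          # (n, n)
    linear = h_head @ w                      # (n,)
    return bilinear + linear[None, :] + b

# Hypothetical sizes: 6 tokens with 8-dimensional arc representations.
rng = np.random.default_rng(0)
n, d = 6, 8
scores = biaffine_arc_scores(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                             rng.normal(size=(d, d)), rng.normal(size=d), 0.0)
predicted_heads = scores.argmax(axis=1)      # greedy arc choice per token
</code></pre><p>In the full model, MLPs produce the arc (and separate relation) representations from the Bi-LSTM outputs, and relation labels are scored in parallel with the arcs, as described below.</p>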
<p><strong>Approach</strong></p><p>Our approach utilizes a layered architecture to extract temporal relations from text. At the base, we have BERT embeddings that capture contextual information from the input. These feed into a Bi-LSTM layer that adds sequential understanding. For specific relation extraction, we use a biaffine attention mechanism with Multilayer Perceptrons (MLPs) to focus on the dependency (arc) and the type of relationship (relation) between events. This structure enables precise predictions of temporal relations, forming a graph that represents the interconnectedness of events within the text.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.13.45-AM.png" class="kg-image" alt="Temporal relation extraction with a graph-based deep biaffine attention model" loading="lazy" width="1230" height="1106" srcset="https://collections.bobobobobobo.net/content/images/size/w600/2024/01/Screenshot-2024-01-21-at-12.13.45-AM.png 600w, https://collections.bobobobobobo.net/content/images/size/w1000/2024/01/Screenshot-2024-01-21-at-12.13.45-AM.png 1000w, https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-21-at-12.13.45-AM.png 1230w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Model Architecture</span></figcaption></figure><p><strong>Achieving SOTA accuracy while processing more sentences per second</strong></p><p>Benchmarks show that in a fixed amount of time, our model processes more sentences than previous studies, while providing comparable accuracy. This is primarily due to our model design, which removes the event extraction dependency for relation extraction. The prediction of temporal relation labels is performed in parallel with arc prediction, eliminating the need for the non-parallelizable, nested for-loop used to extract event pairs in previous studies.</p><p><strong>Why this work may be useful</strong></p><p>This work is useful because it exploits the logical structure of temporal relation tasks. By aligning with temporal logic principles and employing sophisticated language models, it has the potential to surpass current performance benchmarks in understanding textual timelines. This has the potential to advance applications in areas that require detailed, precise comprehension of events over long time horizons.</p><p><strong>Why this work may not be useful</strong></p><p>The work may be less impactful in certain contexts, particularly in light of the advances made by large language models. These LLMs have demonstrated superior accuracy in temporal relation tasks and are adept at recognizing even subtle and obscure events. This capability stems from their extensive training on diverse datasets, enabling them to understand and predict complex temporal patterns. Consequently, specialized approaches like the one in question might not offer additional benefits over these highly efficient, generalized models in certain applications.</p><p><strong>Verdict: Might be useful in the future given a significant breakthrough in neuro-symbolic AI</strong></p>]]></content:encoded></item><item><title><![CDATA[NoDistort: Drawing Distortion Recovery System for Shaky Screens]]></title><description><![CDATA[<p>Have you ever tried writing on your touchscreen device while on the move, only to end up with illegible scribbles? This common frustration is what the &quot;NoDistort&quot; project aims to eliminate. 
The project addresses the challenge of handwriting distortion on shaky or moving touchscreens, a problem faced by</p>]]></description><link>https://collections.bobobobobobo.net/nodistort-drawing-distortion-recovery-system-for-shaky-screens/</link><guid isPermaLink="false">65aca092ea4edd0001ffe132</guid><dc:creator><![CDATA[Bo]]></dc:creator><pubDate>Wed, 19 Dec 2018 05:00:00 GMT</pubDate><media:content url="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-20-at-11.49.09-PM.png" medium="image"/><content:encoded><![CDATA[<img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-20-at-11.49.09-PM.png" alt="NoDistort: Drawing Distortion Recovery System for Shaky Screens"><p>Have you ever tried writing on your touchscreen device while on the move, only to end up with illegible scribbles? This common frustration is what the &quot;NoDistort&quot; project aims to eliminate. The project addresses the challenge of handwriting distortion on shaky or moving touchscreens, a problem faced by many in our fast-paced, mobile world.</p><p>visit our detailed project report:<br>
<a href="https://www.ntsec.edu.tw/science/detail.aspx?a=21&amp;cat=13363&amp;sid=13413&amp;ref=collections.bobobobobobo.net">link</a><br>
<a href="https://www.ntsec.edu.tw/article/FileAtt.ashx?id=11223&amp;ref=collections.bobobobobobo.net">pdf</a></p>
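<p>The sections below describe the actual system; as a rough, hypothetical sketch of the underlying compensation idea (not the project&apos;s exact algorithm), the screen&apos;s own displacement can be estimated from its accelerometer and added back to each touch sample:</p><pre><code class="language-python">import numpy as np

def compensate_strokes(touch_xy, accel_xy, dt):
    # Crude double integration of the measured screen acceleration estimates how far
    # the device itself has moved; adding that displacement back re-expresses the
    # recorded stroke in (approximately) steady coordinates.
    velocity = np.cumsum(accel_xy, axis=0) * dt
    displacement = np.cumsum(velocity, axis=0) * dt
    return touch_xy + displacement

# Toy example at 1 kHz: a straight horizontal stroke drawn while the screen shakes
# vertically. The recorded stroke wobbles; the recovered one is far closer to a
# straight line (a small numerical-integration drift remains).
dt = 0.001
t = np.arange(0.0, 1.0, dt)
shake = 5.0 * (1.0 - np.cos(8 * np.pi * t))                  # screen displacement
shake_accel = 5.0 * (8 * np.pi) ** 2 * np.cos(8 * np.pi * t)
touch = np.stack([50.0 * t, -shake], axis=1)                 # what the digitizer records
accel = np.stack([np.zeros_like(t), shake_accel], axis=1)
recovered = compensate_strokes(touch, accel, dt)
</code></pre>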
<p><strong>Merging Sensor Data with Smart Algorithms</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-20-at-11.58.59-PM.png" class="kg-image" alt="NoDistort: Drawing Distortion Recovery System for Shaky Screens" loading="lazy" width="1812" height="996" srcset="https://collections.bobobobobobo.net/content/images/size/w600/2024/01/Screenshot-2024-01-20-at-11.58.59-PM.png 600w, https://collections.bobobobobobo.net/content/images/size/w1000/2024/01/Screenshot-2024-01-20-at-11.58.59-PM.png 1000w, https://collections.bobobobobobo.net/content/images/size/w1600/2024/01/Screenshot-2024-01-20-at-11.58.59-PM.png 1600w, https://collections.bobobobobobo.net/content/images/2024/01/Screenshot-2024-01-20-at-11.58.59-PM.png 1812w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Experimental Validation Setup</span></figcaption></figure><p>NoDistort&apos;s approach is as ingenious as it is effective. By harnessing the power of a device&apos;s internal motion sensors, coupled with a sophisticated algorithm, this technology promises to revolutionize how we interact with our touchscreens. It&apos;s not just about tracking; it&apos;s about understanding and compensating for the device&apos;s movements in real-time, ensuring that what you write is what appears on the screen &#x2013; clear and precise.</p><p><strong>Why NoDistort may be useful</strong></p><p>NoDistort is particularly useful as it significantly improves the usability and accuracy of touchscreen devices in unstable or mobile environments. This technology is beneficial for users who need to write or draw on their devices while in motion, such as in a vehicle or while walking. By compensating for movement-related distortions, it ensures legibility and precision in digital handwriting, enhancing the overall user experience.</p><p><strong>Why NoDistort may not be useful</strong></p><p>Given the advancements in handwriting recognition algorithms, which are increasingly capable of interpreting and correcting distorted inputs, the utility of NoDistort may be somewhat diminished. These algorithms have evolved to learn from training data, including distorted handwriting, thereby enhancing their accuracy in varied conditions. Additionally, humans inherently possess a level of self-correction when writing in shaky environments. This natural adaptability, combined with the sophistication of current recognition software, suggests that the need for an external solution like NoDistort could be less critical in many situations.</p><p><strong>Verdict: No longer useful given current technological advancements</strong></p>]]></content:encoded></item></channel></rss>