Category: Blog

  • ts-collections

    TS-Collections

    Installation

    Add this library using npm or yarn from your terminal.

    NPM

    npm i ts-collection-set
    YARN

    yarn add ts-collection-set

    Usage

    Import the collections library into your TypeScript file.

    import * as Collections from "ts-collection-set"

    Dictionary (HashMap)

    Initialize a dictionary instance as follows.

    const englishDictionary: Collections.Dictionary<string, string> = new Collections.Dictionary();

    Add new elements to your dictionary instance as follows. The first parameter is the element's key and the second is its value.

    englishDictionary.setValue("A", "First english alphabet character.");
    englishDictionary.setValue("Grey", "A metallic shade of black");
    englishDictionary.setValue("Doctor", "An expert in a particular field of training");

    Check if the dictionary contains a particular key using this method.

    englishDictionary.containsKey("key");

    Remove an element from the dictionary instance using its key as follows.

    englishDictionary.remove("Key");

    Get an array of key elements as follows.

    englishDictionary.keys();

    Set (ArrayList)

    Initialize a set instance as follows.

    const numArray: Collections.Set<string> = new Collections.Set();

    Add elements to your set instance as follows.

    numArray.add("1");
    numArray.add("2");
    numArray.add("3");
    numArray.add("4");

    Remove an element from your set instance as follows.

    numArray.remove("2");

    Check whether your set instance contains a particular element using the following method.

    numArray.contains("1");
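
    Putting the pieces above together, a short hypothetical snippet (using only the methods shown in this README) could look like this:

    import * as Collections from "ts-collection-set";

    const englishDictionary: Collections.Dictionary<string, string> = new Collections.Dictionary();
    englishDictionary.setValue("Grey", "A metallic shade of black");
    englishDictionary.setValue("Doctor", "An expert in a particular field of training");

    // Remove an entry, then confirm it is gone.
    englishDictionary.remove("Doctor");
    console.log(englishDictionary.containsKey("Doctor")); // false

    // keys() returns an array, so it can be iterated directly.
    for (const key of englishDictionary.keys()) {
        console.log(key);
    }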

    License

    MIT License

    Copyright (c) 2020 Ian Mugambi

    Permission is hereby granted, free of charge, to any person obtaining a copy
    of this software and associated documentation files (the “Software”), to deal
    in the Software without restriction, including without limitation the rights
    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    copies of the Software, and to permit persons to whom the Software is
    furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all
    copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
    SOFTWARE.

    Visit original content creator repository

  • sumologic-sdk-ruby

    Sumo Logic Ruby SDK


    Ruby interface to the Sumo Logic REST API.

    Usage

    The Ruby SDK is ported from the Sumo Logic Python SDK.

    The following methods are currently implemented:

    sumo = SumoLogic::Client.new access_id, access_key
    
    # Search
    r = sumo.search query [, from, to, time_zone]
    
    r = sumo.search_job query [, from, to, time_zone]
    
    r = sumo.search_job_messages {'id' => search_job_id}, limit, offset
    
    r = sumo.search_job_records {'id' => search_job_id}, limit, offset
    
    r = sumo.search_job_status {'id' => search_job_id}
    
    # Dashboards
    r = sumo.dashboards
    
    r = sumo.dashboard dashboard_id
    
    r = sumo.dashboard_data dashboard_id
    
    # Collectors
    r = sumo.collectors [limit, offset]
    
    r = sumo.collector collector_id
    
    r = sumo.update_collector collector, etag
    
    r = sumo.delete_collector collector
    
    # Sources
    r = sumo.sources collector_id [, limit, offset]
    
    r = sumo.source collector_id, source_id
    
    r = sumo.update_source collector_id, source, etag
    
    r = sumo.delete_source collector_id, source
    
    # Content
    r = sumo.create_content path, data
    
    r = sumo.get_content path
    
    r = sumo.delete_content path
    
    # Low-Level
    r = sumo.post path, data

    Note that, for the search methods, the query parameter can be exactly the same query you would enter into the Sumo Logic web console.

    Example scripts are located in the scripts directory of the GitHub repo.
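
    As a rough sketch only (the query, time range, and the exact shape of the returned job and status objects are assumptions, not taken from the SDK docs), a polled search-job workflow could look like this:

    sumo = SumoLogic::Client.new access_id, access_key   # your Sumo Logic credentials

    # Start a search job over an explicit time range.
    job = sumo.search_job 'error | count by _sourceHost',
                          '2023-01-01T00:00:00', '2023-01-02T00:00:00', 'UTC'
    job_id = job['id']   # assumes the returned job hash exposes its ID this way

    # Poll until Sumo Logic reports the job has finished gathering results.
    sleep 5 until sumo.search_job_status({ 'id' => job_id })['state'] == 'DONE GATHERING RESULTS'

    # Page through the first 100 records.
    records = sumo.search_job_records({ 'id' => job_id }, 100, 0)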

    Change Log

    See CHANGELOG.md.

    Links

    Project Repo

    Sumo Logic API Documentation

    Sumo Logic Python SDK

    Contributions

    Please add your scripts and programs to the scripts folder.

    Any reports of problems, comments or suggestions are most welcome.

    Please report these on GitHub.

    License

    Sumo Logic Ruby SDK is available under an MIT-style license. See LICENSE.md for details.

    Sumo Logic Ruby SDK © 2015-2023 by John Wang

    Visit original content creator repository
  • RISCV-Simulator

    RISCV-Simulator

    An instruction set simulator for the RISC-V architecture written in Java.
    Written as the last assignment for the course “02155: Computer Architecture and Engineering” at the Technical University of Denmark

    Simulates the RV32I Base Instruction Set (excluding EBREAK, CSR*, fence* and some environment calls)

    Environment Calls

    ID (x10)   Name           Description
    1          print_int      Prints integer in x11
    4          print_string   Prints null-terminated string whose address is in x11
    10         exit           Stops execution
    11         print_char     Prints character in x11
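
    For example, a fragment that prints the integer 42 and then stops could look like this (the exact assembler syntax accepted by the simulator is an assumption; the register convention follows the table above):

    addi x11, x0, 42     # argument for print_int goes in x11
    addi x10, x0, 1      # environment-call ID 1 = print_int
    ecall
    addi x10, x0, 10     # environment-call ID 10 = exit
    ecall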

    Compiling and running

    Install packages

    If you haven’t run a JavaFX application on Ubuntu before, run the following command:

    sudo apt-get install openjfx
    

    Java Development Kit 8

    Compile

    Assuming no other Java files are present:

    cd path/to/package/files
    javac *.java
    

    Run

    Assuming the current working directory contains the RISCVSimulator package directory:

    cd path/to/package/
    java RISCVSimulator.Main
    

    OpenJDK 11

    As OpenJDK no longer supplies a runtime environment or JavaFX, you need to download OpenJFX separately.
    The path to OpenJFX will be referred to as %PATH_TO_FX%.

    Compile

    cd path/to/package/files
    javac --module-path %PATH_TO_FX% --add-modules javafx.fxml,javafx.base,javafx.controls,javafx.graphics *.java
    

    Run

    Requires a Java 11 Runtime Environment. This is easily obtained on Ubuntu through apt, but Windows users will need to use jlink to build their own; see Releases for an example.
    Assuming the current working directory contains the RISCVSimulator package directory:

    cd path/to/package
    java --module-path %PATH_TO_FX% --add-modules javafx.fxml,javafx.base,javafx.controls,javafx.graphics RISCVSimulator.Main
    

    Unfortunately, the program was not written with modular Java support in mind. For this reason, there is no better way of running the program, as it is not possible to use jlink to build the application with all dependencies bundled. Writing batch files or shell scripts is advised.

    Visit original content creator repository

  • tweet_to_twitter

    Twitter Automation using Selenium

    This Python script demonstrates how to automate posting a tweet on Twitter using the Selenium web automation library. It opens a Chrome web browser, logs into a Twitter account, composes a tweet, and posts it.

    Prerequisites

    Before running the script, make sure you have the following prerequisites installed:

    1. Python

    2. Selenium library installed. You can install it using pip:

      pip install selenium
    3. Google Chrome browser installed.

    Usage

    1. Import the necessary libraries:

      from selenium import webdriver
      from selenium.webdriver.chrome.options import Options
      import time, os
    2. Configure Chrome options to store cookies and start the browser maximized:

      options = Options()
      options.add_argument(f"user-data-dir={os.path.join(os.getcwd(), 'cookies')}")  # join with a separator so the profile lands in a 'cookies' subfolder
      options.add_argument("start-maximized")
      • The user-data-dir argument is used to specify the directory to store user data, including cookies, to maintain your Twitter login session.
    3. Create a Chrome webdriver instance:

      driver = webdriver.Chrome(options=options)
    4. Navigate to the Twitter website:

      driver.get("https://twitter.com")
    5. Wait for the Twitter page to load:

      time.sleep(2)
    6. Locate the tweet input field and send a tweet:

      sendmessage = driver.find_element_by_xpath("*//*[@contenteditable='true']")
      sendmessage.send_keys("test")
    7. Find and click the tweet button:

      kliktweet = driver.find_element_by_xpath("//div[@data-testid='tweetButtonInline']")
      kliktweet.click()
    8. Wait for a specified time (e.g., 60 seconds) to ensure the tweet is posted:

      time.sleep(60)
    9. Quit the Chrome driver:

      driver.quit()
    10. Customize the script to meet your specific automation needs, such as logging in with your Twitter account and posting the desired content.
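
    Note that find_element_by_xpath is the older Selenium 3 API; if pip installs Selenium 4 or newer, the equivalents of steps 6 and 7 look like this:

    from selenium.webdriver.common.by import By

    # Selenium 4 replacements for find_element_by_xpath
    sendmessage = driver.find_element(By.XPATH, "*//*[@contenteditable='true']")
    sendmessage.send_keys("test")

    kliktweet = driver.find_element(By.XPATH, "//div[@data-testid='tweetButtonInline']")
    kliktweet.click()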

    Notes

    • This script logs into Twitter using an existing account’s session data. Make sure to replace os.getcwd() with the appropriate directory path if needed.
    • Be cautious while automating actions on websites to comply with their terms of service.

    License

    This script is provided under the MIT License.

    
    You can customize the script further to suit your specific requirements; just make sure you have the correct WebDriver for Chrome installed.
    

    Visit original content creator repository

  • fxserver


    fxserver

    Setup cloud as a VFX server.



    About

    Quick tutorial to setup a Cloud Server for multiple machines access, and VFX Pipeline on Windows, macOS and Linux. This repository is based on Google Drive VFX Server, with loads of improvements.

    Setup Server

    First, you’ll need to mount your Cloud server on your system, using any software you like (rclone, Google Drive File Stream, etc.)

    We can then start moving files around. The setup only relies on environment variables:

    • SERVER_ROOT: The root of the mounted Cloud server. This is the only value that needs to be changed depending on your setup
    • CONFIG_ROOT: The .config folder
    • ENVIRONMENT_ROOT: the .config/environment folder
    • PIPELINE_ROOT: the .config/pipeline folder

    You can now download the code from this repository and extract its content to your SERVER_ROOT. Using Z:/My Drive as the mounted Cloud server path, it should look like this:

    .
    └── 📁 Z:/My Drive/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline

    Which is equivalent to:

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 $CONFIG_ROOT/
            ├── 📁 $ENVIRONMENT_ROOT
            └── 📁 $PIPELINE_ROOT

    You will need to replace the SERVER_ROOT value in .zshrc (Unix) and/or dcc.bat (Windows) with your mounted Cloud server path (see the sketch after this list):

    • In .zshrc: export SERVER_ROOT="Path/to/drive/linux" (Line 12, 17, 21)
    • In dcc.bat: setx SERVER_ROOT "Path\to\drive\windows" (Line 9)
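
    For instance, on the Unix side the variables end up resolving to something like this (the mount path below is a placeholder; since SERVER_ROOT is the only value that needs changing, the other three are presumably derived from it by the repository's config):

    export SERVER_ROOT="/Volumes/GoogleDrive/My Drive"   # your mounted Cloud server
    export CONFIG_ROOT="$SERVER_ROOT/.config"
    export ENVIRONMENT_ROOT="$CONFIG_ROOT/environment"
    export PIPELINE_ROOT="$CONFIG_ROOT/pipeline"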

    Once the folder structure is created and the SERVER_ROOT value has been modified, you can now assign the environment variables:

    Windows

    Windows supports shell scripting after some manipulation, but it’s far easier to hard-code the environment variables by running dcc.bat.

    dcc.bat

    To check that everything is working:

    • Type Win + I to open the Windows Settings
    • Scroll to the bottom of the page and click About
    • Navigate to Device Specifications and press Advanced System Settings
    • In the System Properties dialogue box, hit Environment Variables
    • The freshly created variables should be under User
    • Check that SERVER_ROOT has been defined with the right path

    Unix

    macOS and Linux are both Unix-based operating systems. The simplest way is to migrate your shell to Zsh using chsh -s $(which zsh) in your terminal. You can then symlink .zshrc into your $HOME folder. To check that everything is working, restart your terminal and type echo $SERVER_ROOT: it should output your mounted Cloud server path.

    Warning

    .zshrc needs to be named exactly that way in $HOME to be picked up by the terminal: remove any suffix that aliasing or symlinking adds to the name.

    Warning

    The Make Alias command in macOS Finder won’t work properly. You should use this service instead to create proper Symlinks: Symbolic Linker

    Software

    This setup automatically links the following DCCs, using this folder structure:

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                ├── 📁 houdini               ──> Using $HSITE
                ├── 📁 maya                  ──> Using $MAYA_APP_DIR
                ├── 📁 nuke                  ──> Using $NUKE_PATH
                ├── 📁 other
                └── 📁 substance_painter
                    └── 📁 python            ──> Using $SUBSTANCE_PAINTER_PLUGINS_PATH

    The DCCs can be launched normally on Windows if the dcc.bat file has been used to define the environment variables.

    For macOS and Linux, you should start them from a terminal, in order to inherit the environment variables defined by .zshrc.

    You can find an example script for Houdini just here: houdini.sh.

    To access it quickly, we also defined a houdini alias pointing to that script in aliases.sh, which lets you launch Houdini by simply typing houdini.

    Maya

    WIP

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                └── 📁 maya/
                    └── 📁 2023/
                        ├── 📄 Maya.env
                        ├── 📁 prefs
                        ├── 📁 presets
                        └── 📁 scripts

    Substance Painter

    WIP

    Note
    See Substance Painter environment variables

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                └── 📁 substance_painter/
                    └── 📁 python/
                        └── 📄 plugin.py

    Houdini

    Houdini will automatically scan the folder defined by $HSITE for any folder being named houdini<houdini version>/<recognized folder> such as otls or packages and load the content of those folders at Houdini startup.

    You can find two package file examples:

    Both take advantage of the environment variables defined earlier (see the sketch after the folder tree below).

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                └── 📁 houdini/
                    └── 📁 houdini19.5/
                        ├── 📁 desktop
                        ├── 📁 otls/
                        │   └── 📄 digital_asset.hda
                        └── 📁 packages/
                            └── 📄 package.json
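
    As an illustration only (this is not one of the repository's actual files), a minimal package file of that kind might look roughly like this, pointing Houdini at a tools folder via an environment variable:

    {
        "env": [
            { "PIPELINE_HOUDINI": "$PIPELINE_ROOT/houdini/houdini19.5" }
        ],
        "path": "$PIPELINE_HOUDINI"
    }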

    Nuke

    Nuke will scan the content of the folder defined by NUKE_PATH, searching for init.py and menu.py.

    You can find an init.py file example, showing how to load plugins on Nuke startup.
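
    A bare-bones init.py along those lines might look like this (the folder name is only an illustration):

    # init.py -- executed by Nuke at startup for every directory on NUKE_PATH
    import nuke

    # Make an additional plugin/gizmo folder visible to Nuke
    nuke.pluginAddPath('./plugins')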

    .
    └── 📁 $SERVER_ROOT/
        └── 📁 .config/
            ├── 📁 environment
            └── 📁 pipeline/
                └── 📁 nuke/
                    ├── 📄 init.py
                    └── 📄 menu.py


    Contact

    Project Link: Cloud VFX Server

    GitHub   LinkedIn   Behance   Twitter   Instagram   Gumroad   Email   Buy Me A Coffee  

    Visit original content creator repository
  • xls2md


    A quick (and rather specific) XLS to MD conversion tool for my local pedagogy management system.

    I’m improving my programming skills as I go, so please feel free to fork this repo and contribute. You can also: Report a Bug / Request Feature

    About this tool

    I’ve made this tool to support my own pedagogy management system that uses a local database made of Markdown files. I wanted a quick way to import student data to track locally. The database keeps me in touch with student trajectories and also helps with understanding where they are coming from, i.e. what courses/skills they have already picked up. I’m hoping to use this for my own post-human pedagogy research but that’s another discussion 😄.

    This tool is pretty barebones and is meant to give me a blank canvas for every student based on their unique IDs. I should mention that this tool is designed around the University of New South Wales’ (Australia) system where I work, and if you plan to use it you will need to make adjustments where necessary. That said, if you are from UNSW and find this useful, I’m glad I could have been of help!

    Getting Started

    Python

    This tool is made in Python and the code is open for scrutiny. Dependencies are required before you can use it from the CLI, but installing a working copy of Python should be enough to run it. To get a local copy up and running, follow these simple steps. I’m working on an executable for later.

    Prerequisites

    Make sure you have Python installed on your system; if you’re on a Mac you can use Homebrew to install it. The Homebrew webpage has instructions on how to install brew. Some dependencies are required and can be installed with both Homebrew and pip once you have Python set up.

    pip install pandas openpyxl

    pandas is needed for handling Excel files and openpyxl for working with .xlsx files; make sure both are included when installing.

    Installation

    No installation is needed; simply clone the repo to a folder on your system.

    git clone https://github.com/haakmal/xls2md.git

    PS. I still haven’t gotten around to making an executable. If anyone with more experience in programming is willing to help, or can explain how to reduce the file size, I am all ears!

    Usage

    Please be advised that this tool is very specific to my needs; if you use it, I recommend tweaking it to your own requirements.

    Setup

    1. I have a template MD file for all my students; this is where I collect the information I need and add to it, for instance their weekly reports, discussions with them, etc.
    2. I have a spreadsheet of students with required information (name, class, email, etc) that is fetched from our LMS. The script extracts the heading of each column as YAML data for the MD files and each row becomes a separate student file. The filename for my database requirements is set as the first column which in this case is an ID number.
    3. I have a list of tutors that are assigned to students; I also keep them as MD files for my database, and the script fetches the file names from a folder I pick so I can assign the tutor to the student’s record.

    The data.xlsx is an example of how the spreadsheet should be prepared. In the template file there are sections for where data is added from the script. For my purposes I have it set in two places, you may need to tweak this to your requirements: {{YAML_DATA}} and {{TITLE_DATA}}.
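
    The core of the conversion is roughly the following (a simplified sketch of the idea, not the full script, which also handles tutor selection and the GUI; the file names match the examples above):

    import pandas as pd  # openpyxl must be installed for .xlsx support

    df = pd.read_excel("data.xlsx")
    template = open("template.md", encoding="utf-8").read()

    id_column = df.columns[0]  # the first column (the student ID) becomes the filename

    for _, row in df.iterrows():
        # Every column heading becomes a YAML key in the student's MD file.
        yaml_data = "\n".join(f"{col}: {row[col]}" for col in df.columns)
        content = (template
                   .replace("{{YAML_DATA}}", yaml_data)
                   .replace("{{TITLE_DATA}}", str(row[id_column])))
        with open(f"{row[id_column]}.md", "w", encoding="utf-8") as md_file:
            md_file.write(content)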

    How to use

    Once everything is collected (i.e. the spreadsheet, template, and list of tutors in a folder), run the script from a terminal; currently it can only be initiated from the CLI. Follow these steps:

    1. Open a terminal
    2. Navigate to repo folder
    3. Depending on your version of Python start the program using one of these commands:
    python main.py
    python3 main.py

    This should start the GUI, which is then self-explanatory. Follow the instructions, selecting the appropriate options, and click the convert button. Each row from the spreadsheet will be extracted as an individual MD file ready for your database!

    XLS2MD GUI

    Contributing

    Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

    If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag “enhancement”. Don’t forget to give the project a star if you found this helpful! Thanks again!

    1. Fork the Project
    2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
    3. Commit your Changes (git commit -m 'Add some AmazingFeature')
    4. Push to the Branch (git push origin feature/AmazingFeature)
    5. Open a Pull Request

    License

    Distributed under the MIT License. See LICENSE.txt for more information.

    Contact

    Dr Haider Ali Akmal – Links

    Project Link: https://github.com/haakmal/xls2md

    Visit original content creator repository
  • polarpy

    polarpy

    Tools for reading and fusing live data streams from Polar OH1 (PPG) and H10 (ECG) sensors.

    Requirements

    If installing from the repo, you need pygatttool (pip install pygatttool).

    Installation

    pip install polarpy
    

    Usage

    The following code starts the raw PPG and IMU streams on a Polar OH1, fuses the blocks of data in the two streams at 135 Hz, and provides a single output stream, with each record having a timestamp, the PPG signal values for each of the 3 pairs of LEDs, and the corresponding accelerometer x, y and z readings.

    from polarpy import OH1
    
    OH1_ADDR = "A0:9E:1A:7D:3C:5D"
    OH1_CONTROL_ATTRIBUTE_HANDLE = 0x003f
    OH1_DATA_ATTRIBUTE_HANDLE = 0x0042
    
    def callback(type: str, timestamp: float, payload: dict):
        print(f'{timestamp} {payload}')
    
    if '__main__' == __name__:
        device = OH1(address=OH1_ADDR,
                     control_handle=OH1_CONTROL_ATTRIBUTE_HANDLE,
                     data_handle=OH1_DATA_ATTRIBUTE_HANDLE,
                     callback=callback)
    
        if device.start():
            while device.run():
                pass
    

    The output looks something like this:

    3.94 {'ppg0': 263249, 'ppg1': 351764, 'ppg2': 351928, 'ax': 0.775, 'ay': -0.42, 'az': 0.476}
    3.947 {'ppg0': 263297, 'ppg1': 351964, 'ppg2': 352077, 'ax': 0.775, 'ay': -0.42, 'az': 0.476}
    3.954 {'ppg0': 263319, 'ppg1': 352062, 'ppg2': 352013, 'ax': 0.778, 'ay': -0.417, 'az': 0.481}
    3.962 {'ppg0': 263293, 'ppg1': 352106, 'ppg2': 352082, 'ax': 0.778, 'ay': -0.417, 'az': 0.481}
    3.969 {'ppg0': 263440, 'ppg1': 352273, 'ppg2': 352199, 'ax': 0.778, 'ay': -0.417, 'az': 0.481}
    
    ...
    

    The callback is used (rather than returning data from run()) because the blocks of PPG, ECG and IMU data arrive with different lengths and at different speeds. The individual samples from each channel must be buffered and interleaved, timestamps interpolated, then delivered asynchronously through the callback.

    The address and attribute handles for your particular device can be found using gatttool or another BLE tool such as nRF Connect.

    Visit original content creator repository

  • microapps-app-release


    Overview

    Example / basic Next.js-based Release app for the MicroApps framework.


    Screenshot

    Main View Screenshot of App

    Try the App

    Launch the App

    Video Preview of the App

    Video Preview of App

    Functionality

    • Lists all deployed applications
    • Shows all versions and rules per application
    • Allows setting the default rule (pointer to version) for each application

    Installation

    Example CDK Stack that deploys @pwrdrvr/microapps-app-release:

    The application is intended to be deployed on the MicroApps framework, and it operates on a DynamoDB Table created by that framework. Thus, it is required that there be a deployment of MicroApps that can receive this application. Deploying the MicroApps framework and general application deployment instructions are covered by the MicroApps documentation.

    The application is packaged for deployment via AWS CDK and consists of a single Lambda function that reads/writes the MicroApps DynamoDB Table.

    The CDK Construct is available for TypeScript, DotNet, Java, and Python with docs and install instructions available on @pwrdrvr/microapps-app-release-cdk – Construct Hub.

    Installation of CDK Construct

    Node.js TypeScript/JavaScript

    npm i --save-dev @pwrdrvr/microapps-app-release-cdk

    Add the Construct to your CDK Stack

    See cdk-stack for a complete example used to deploy this app for PR builds.

    import { MicroAppsAppRelease } from '@pwrdrvr/microapps-app-release-cdk';
    
    const app = new MicroAppsAppRelease(this, 'app', {
      functionName: `microapps-app-${appName}${shared.envSuffix}${shared.prSuffix}`,
      table: dynamodb.Table.fromTableName(this, 'apps-table', shared.tableName),
      nodeEnv: shared.env as Env,
      removalPolicy: shared.isPR ? RemovalPolicy.DESTROY : RemovalPolicy.RETAIN,
    });
    Visit original content creator repository
  • userscripts

    userscripts

    These are some userscripts I made over the years, some of which used to be up on userscripts.org

    Most of them help with downloading manga/comic chapters from online readers, as I prefer offline readers like MComix.

    Contents

    update-metablocks is a small shell script that takes the

          ==UserScript==
          ...
          ==/UserScript==

    metablocks from userscript files, and copies them into .meta.js files.

    Its features include automatic @date updating and insertion.

    For example, run ./update-metablocks --help for documentation, or ./update-metablocks */*.user.js to do a dry run on all user.js files.
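
    For reference, the kind of metadata block it operates on looks roughly like this (all values are hypothetical):

    // ==UserScript==
    // @name        Example Script
    // @namespace   https://example.com
    // @version     1.0
    // @date        2024-01-01
    // @include     https://example.com/*
    // ==/UserScript==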

    A Proof of Concept script I created for a feature suggestion I made.

    It has a lot of issues, chief of which is that vertical scrolling with the keys does not seem to work, so I haven’t re-worked it as an installable userscript but it could serve as the basis of one.

    A script I made as a client-side implementation for this feature request.

    It toggles the visibility of tags on title pages to avoid possible spoilers, with an option to show them.

    Adds Download Links to Foolslide reader links on a (front) page.

    It is only set to work for Akashi Scans by default, but it should work for any page you know contains links to a FoOlslide reader.

    Screenshots:
    Download links
    Multiple Download Links

    Provides direct image and external download links for the MangaStream Reader.

    Screenshots:
    Navigation Menu Direct Links Torrent Links

    Adds hovering info boxes, similar to the ones on the Top Anime/Manga pages, to normal links on MyAnimeList.

    Note: This is a very dirty hack; it doesn’t work great, and it never will.

    Screenshots:
    On shared lists On profiles

    Hides non-creator posts on Patreon from a project's Activity page.

    Screenshots:
    Hide comments off Hide comments on

    A no-nonsense userscript that hides watched videos from your subscription inbox on Youtube.

    Visit original content creator repository