This is a Storybook component library based on Tailwind. The main idea of this repo is to be a backup of custom components, designed to be short and simple and to integrate into your React app.
We based the design structure on Atomic Design, which offers a more condensed way to organize components. We are not here to explain what Atomic Design is, but to show how it can improve the way you work with components.
Illustrations
This is where the illustrations go.
Contributing
Do you want to contribute?
Check these rules on how to help us.
This does not always work correctly with Puppet Enterprise 2016: PE purges the plugin-synced fact directories on each run, which removes the fact files that the Puppet agent thinks came from custom facts.
Module Description
As mentioned in many getting started with Puppet guides, including some by
Puppet Labs, caching a fact can be useful.
A well-maintained cache can:
reduce frequency of expensive calls
store values reachable outside of Puppet agent runs
explicitly control schedule of fact refreshing
There is limited planned support in Facter 2.0 and later for controlling some
caching of Puppet facts. Personally, this developer has never seen issues with it in the wild.
No, this is not yet-another-varnish module either.
Setup
What facter_cacheable affects
Deploys a feature, facter_cacheable, which is usable for custom facts written
by Puppet developers.
Setup Requirements
Caching with this module requires at least Ruby 2.3 and Puppet 4.7. Older releases cannot even run the test harness anymore.
PluginSync must be enabled on at least one Puppet agent run to deploy the module.
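PluginSync is enabled by default on modern agents. If it has been turned off, a minimal puppet.conf sketch to re-enable it (an assumption: your release still honors the pluginsync setting, which was deprecated in later Puppet versions) is:
[agent]
  pluginsync = true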
Beginning with facter_cacheable
Usage
This module accepts no customization. The facter_cache() call takes options for:
the value to cache
a time-to-live (ttl)
an optional location to store the cache in.
If the directories containing the cache files do not exist, the module will attempt to
create them.
#
# my_module/lib/facter/my_custom_fact.rb
#
require 'facter'
require 'puppet/util/facter_cacheable'

Facter.add(:my_custom_fact) do
  confine do
    Puppet.features.facter_cacheable?
  end
  setcode do
    # 24 * 3600 = 1 day of seconds
    cache = Facter::Util::FacterCacheable.cached?(:my_custom_fact, 24 * 3600)
    if !cache
      my_value = some_expensive_operation()
      # store the value for later
      Facter::Util::FacterCacheable.cache(:my_custom_fact, my_value)
      # return the expensive value
      my_value
    else
      # return the cached value (this may need processing)
      cache
    end
  end
end
It is not required, but encouraged, to keep the name of the cache and the fact
the same. Although, as with all Ruby programming, sanity is optional, as is
having documentation.
YAML stored values may appear as arrays or string-indexed hashes depending on
the version of Puppet and Facter involved. Unpacking those is left as an
exercise for the reader.
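For example, a minimal unpacking sketch might look like the following (the case arms are assumptions about the shapes you may encounter, not an exhaustive list):
cache = Facter::Util::FacterCacheable.cached?(:my_custom_fact, 24 * 3600)
value = case cache
        when Hash  then cache['my_custom_fact'] # e.g. { 'my_custom_fact' => 'foo' }
        when Array then cache.first             # e.g. [ 'foo' ]
        else cache                              # already a plain value
        end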
Testing Code
To test code that uses Facter_cacheable, you will have to resort to a
little-used method for stubbing objects.
In your Facter fact, guard against the import of the module: the import will fail
if the module is not deployed to the Puppet environment in which the tests run.
Note: even the rSpec setup will not properly install this utility for testing.
begin
  require 'facter/util/facter_cacheable'
rescue LoadError => e
  Facter.debug("#{e.backtrace[0]}: #{$!}.")
end
# regular fact like the complete example above
In the rSpec tests for the fact, normally some kind of functional test on
Facter.value(), set up a harness which can check for invocation of the cache
functions.
context 'test caching' do
  # mydata and mod are assumed to be defined elsewhere in the spec
  let(:fake_class) { Class.new }
  before :each do
    allow(File).to receive(:exist?).and_call_original
    allow(Puppet.features).to receive(:facter_cacheable?) { true }
    Facter.clear
  end
  it 'should return and save a computed value with an empty cache' do
    stub_const("Facter::Util::FacterCacheable", fake_class)
    expect(Facter::Util::FacterCacheable).to receive(:cached?).with(
      :my_fact, 24 * 3600) { nil }
    expect(Facter::Util::Resolution).to receive(:exec).with(
      'some special command') { mydata }
    expect(Facter::Util::FacterCacheable).to receive(:cache).with(
      :my_fact, mydata)
    expect(Facter.value(:my_fact)).to eq(mydata)
  end
  it 'should return a cached value with a full cache' do
    stub_const("Facter::Util::FacterCacheable", fake_class)
    expect(Facter::Util::FacterCacheable).to receive(:cached?).with(
      :my_fact, 24 * 3600) { mydata }
    expect(mod).to_not receive(:my_fact)
    expect(Facter.value(:my_fact)).to eq(mydata)
  end
end
The key parts are the :fake_class and the stub_const() calls. These set up
a kind of double that rSpec can use to hook into the Facter context.
Supports F/OSS Puppet 4.7.0+. Tested on AIX, recent vintage Solaris, SuSE, RedHat and RedHat-derivatives.
Does not support Puppet Enterprise due to the cached value wipe on each run.
Don’t be surprised if it works elsewhere, too. Or if it sets your house on fire.
The name of this module, facter_cacheable, was chosen to not conflict with other
existing implementations such as the Facter::Util::Cacheable support in early
implementations of waveclaw/subscription_manager.
Development
Please see CONTRIBUTING for advice on contributions.
This project’s goal is to allow users to emulate all of the features of OCPP (both 1.6 & 2.0.1) in order to allow
easier testing and speed up local development. Here is an overview of what has been implemented in the project so
far:
OCPP 1.6
Core (Done)
Firmware Management (Done)
Local Auth List Management (Done)
Reservation (Not Done)
Smart Charging (Partially Done)
Remote Trigger (Done)
OCPP 2.0.1
Currently under development, the OCPP 2.0.1 version is not yet fully implemented, but we’re working on it.
How to run
If you’re using IntelliJ IDEA, you can just run one of the two configurations that are saved in the .run folder:
Run V16 for OCPP 1.6
Run V201 for OCPP 2.0.1
If you’re just using the terminal, you can run one of the following commands:
OCPP 1.6
./gradlew v16:run
OCPP 2.0.1
./gradlew v201:run
What’s up with the 🤖?
Clicking the icon gives access to “message interception”. The primary purpose is to have a high degree of control over which messages
are sent and received by the charge point. That way it is possible to replicate potentially buggy behavior or custom implementations in
a one-off manner without needing to change the actual programming of the charge point. For “normal operation” of the charge point the
standard interface should be sufficient.
Also note that the message interception functions are not hooked up to the internal machinery of the charge point. For example, sending
a StopTransaction message will not actually change the state of an ongoing charge to be stopped. That means using these functions
also makes it very easy to put the charge point into a state that does not match up with what the CSMS is expecting, which can quickly
lead to unexpected behavior.
Executables
If you only care about running the application, you can find the latest release on
the releases page. We are currently building executables for
Windows, Linux, and macOS.
How to contribute
We welcome contributions from everyone who is willing to improve this project. Whether you’re fixing bugs, adding new
features, improving documentation, or suggesting new ideas, your help is greatly appreciated! Just make sure you
follow these simple guidelines before opening up a PR:
Follow the Code of Conduct: Always adhere to the Code of Conduct and be respectful of others
in the community.
Test Your Changes: Ensure your code is tested and as bug-free as possible.
Update Documentation: If you’re adding new features or making changes that require it, update the documentation
accordingly.
Dynamsoft Label Recognizer Samples for .NET edition
⚠️ Notice: This repository has been archived. For the latest examples utilizing label recognition features, please visit the Dynamsoft Capture Vision Samples repository.
This repository contains multiple samples that demonstrate how to use the Dynamsoft Label Recognizer .NET Edition.
System Requirements
Windows:
Supported Versions: Windows 7 and higher, or Windows Server 2003 and higher
Architecture: x64 and x86
Development Environment: Visual Studio 2012 or higher.
This sample demonstrates the simplest way to recognize text from image files in a directory with Dynamsoft Label Recognizer SDK.
License
The library requires a license to work. Use the LicenseManager.InitLicense API to initialize the license key and activate the SDK.
These samples use a free public trial license, which requires a network connection to function. You can request a 30-day free trial license key from the Customer Portal, which works offline.
All the resources used for the ‘PoeticScrapper’ Telegram bot. Please note that I used a laptop running Windows 10, and I created, edited, and ran the code via Atom (with additional packages installed).
The code(s) included in this repository:
a) telegramtestbed.py – this piece of code is mainly used to test the functionality of the bot (when it is newly created by BotFather), such as sending/receiving messages from specific users. This code need not be used anymore once the bot’s functionality is verified.
b) asynciobot.py – this piece of code is important to handle the user requests in terms of retrieving the required information from the website, i.e. establishing the asynchronous input/output characteristics of the bot, the type of replies to be sent to the user, and how long the bot will run for.
c) webscraper.py – this piece of code is the ‘heart of the telegram bot’ as it contains the main functions required for the web-scraping to be executed by the telegram bot.
d) poeticscrapperfinalcode.py – this piece of code is the final product that houses all the lines of code required to run the telegram bot locally. It possesses functions that are defined to (i) map keywords to specific actions and (ii) specify the elements on the webpage that will be scraped by the bot. All of these are enabled by the aforementioned bot1.py code (that has been inserted into this piece of code).
DISCLAIMER: The bot can be accessed by searching @PoeticScraper_bot on Telegram. However, do note that it can only run locally (on my laptop terminal) as of right now.
Note that scrapefm1.py (in the ‘initial football manager idea’ folder) is the piece of code that I created for my initial project idea (SCRAPPED), and it does not have any bearing on the current Poetic Scraper bot. I inserted it into this repository as evidence to support my explanatory piece.
Build should work on any Linux distribution, but packaging
and architecture detection will only work on Debian and
Ubuntu. By default, everything is compiled for the host
architecture, but cross-compilation is available. Currently
armhf and arm64 are supported if the required tools are
available. The default compiler is clang, but GCC can also
be used. Builds were tested with Ubuntu 16.04 and 18.04,
with all the latest updates installed.
There are three build modes you can choose from:
- DEVELOPMENT (default)
Development mode will build and install all components into
one subdirectory under the build directory. Useful when an
IDE needs a full include path and for debugging. By default,
products are installed into the “devtree-” subdirectory.
- PACKAGE
Package mode will build .deb packages for each target
and set /opt/swarmio as the install prefix. Find the packages
under the “packages” subdirectory.
- INSTALL
Install will install all outputs onto the local machine,
by default under /opt/swarmio.
To build, create a new directory outside the source tree,
and issue the following commands inside it:
cmake <PATH TO SOURCE> [optional parameters]
cmake --build .
Optional parameters (a combined example follows this list):
-DSWARMIO_BUILD_MODE=<MODE> (DEVELOPMENT, PACKAGE or INSTALL)
Specifies build mode (see above). DEVELOPMENT build will
generate debug executables, while PACKAGE and INSTALL builds
will compile with release flags (and debug symbols).
-DCMAKE_BUILD_TYPE=<TYPE>
Overrides the build type derived from the build mode. See the
CMake docs for more info.
-DCMAKE_INSTALL_PREFIX=<PATH>
Overrides the installation directory. For DEVELOPMENT and
INSTALL build, this will override the output directory. For
PACKAGE builds, it will override the installation prefix
of the generated packages.
-DSWARMIO_TARGET_ARCHITECTURE=<ARCH> (armhf or arm64)
Enables cross-compilation – required tools must already be
installed. On Ubuntu systems, the matching crossbuild-essential
packages (crossbuild-essential-armhf or crossbuild-essential-arm64)
should do the trick.
-DSWARMIO_BUILD_ROS_NODE=ON
If turned on, swarmros will be built.
-DSWARMIO_MULTISTRAP_CONFIGURATION=<CONFIG>
If configured, a full multistrap sysroot will be initialized
before cross-compilation. Requires multistrap and a bunch of
other tools to be installed. Currently supported configurations:
xenial (Ubuntu 16.04)
xenial-ros (Ubuntu 16.04 with ROS Kinetic Kame)
bionic (Ubuntu 18.04)
bionic-ros (Ubuntu 18.04 with ROS Melodic Morenia)
-DSWARMIO_SYSROOT=<SYSROOT PATH>
If multistrap is not used, a sysroot needs to be manually set
for cross-compilation to work. Ensure that the system image
contains no absolute symbolic links before using it.
-DSWARMIO_GCC_VERSION=<GCC VERSION>
If multistrap is not used, this should be set to indicate the
GCC version present in the manually specified sysroot in order
to help compilers find the correct libraries to link against.
-DSWARMIO_ROS_PREFIX=<PATH>
Specifies the location of the ROS installation to use when
building the ROS node. If not specified, the script will try
to automatically detect it – by looking for the default
installation directory of Kinetic Kame and Melodic Morenia.
-DSWARMIO_PREFER_GCC=ON
If specified, GCC will be used instead of clang. Please note
that cross-compilation will most likely only work if the
same operating system (with a different architecture) is
used on the host machine. Requires GCC 6 or later.
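For example, here is a hypothetical invocation combining the options above:
a packaged arm64 cross-build using the bionic multistrap configuration. This
is a sketch, not the only valid combination:
cmake <PATH TO SOURCE> -DSWARMIO_BUILD_MODE=PACKAGE -DSWARMIO_TARGET_ARCHITECTURE=arm64 -DSWARMIO_MULTISTRAP_CONFIGURATION=bionic
cmake --build .
And if you supply a manual sysroot, one quick way to list offending absolute
symbolic links (assuming GNU find with -lname support) is:
find <SYSROOT PATH> -type l -lname '/*'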
Building on Windows
On Windows, only DEVELOPMENT mode is supported. Building the
ROS node and multistrap environments are not supported. Basic
build command is the same as on Linux:
cmake <PATH TO SOURCE> [optional parameters]
cmake --build .
Marvin is a shell script to set up a macOS laptop for development.
It can be run multiple times on the same machine safely. It installs, upgrades, or skips packages based on what is already installed on the machine.
We support:
macOS Mavericks (10.9)
macOS Yosemite (10.10)
macOS El Capitan (10.11)
macOS Sierra (10.12)
Older versions may work but aren’t regularly tested. Bug reports for older
versions are welcome.
Install
In your Terminal window, copy and paste the command below, then press return.
curl --silent https://raw.githubusercontent.com/ravisuhag/marvin/master/mac | sh 2>&1 | tee ~/marvin.log
The script itself is available in this repo for you to review if you want to see what it does and how it works.
Once the script is done, quit and relaunch Terminal.
It is highly recommended to run the script regularly to keep your computer up to date. Once the script has been installed, you’ll be able to run it at your convenience by typing laptop and pressing return in your Terminal.
Your last marvin run will be saved to ~/marvin.log.
Read through it to see if you can debug the issue yourself.
If not, copy the lines where the script failed into a
new GitHub Issue for us.
Or attach the whole log file.
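If you just want the failing lines quickly, a simple search over the log is a reasonable start (assuming the default ~/marvin.log location):
grep -n -i 'fail' ~/marvin.log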
Marvin is inspired by the laptop script, customized for my own needs. It is free software and may be redistributed under the terms specified in the LICENSE file.