Best Binary Options Brokers 2020 - Platforms & Reviews

best binary signal provider - top 3 binary options signal software's 2017 - best signal providers submitted by TradingStrategys to u/TradingStrategys [link] [comments]

Bluehole - Let's talk Wellbia/XIGNCODE3 user privacy risks for the sake of transparency

For those who don't know..
XIGNCODE3 is a kernel-level (ring 0) process, xhunter1.sys, owned by the Korean company Wellbia (www.wellbia.com). Contrary to what people say, Wellbia isn't owned by or affiliated with Tencent; however, XIGNCODE3 is custom-built under contract for each individual game - mainly games operating in the APAC region, many of them owned by Tencent.
XIGNCODE3 is licensed out to companies as a product tailored to the specific characteristics of each game. The process runs at the highest privilege level of the OS from boot and is infamous for essentially being a rootkit - at a malware level, it is the component most vulnerable to abuse should Wellbia or any of the third-party companies become the target of an attack.
It has been heavily dissected and reverse engineered by the hacking community, which regards it as highly intrusive (and it is still easily bypassed today by a skilled, determined modder who builds a custom Windows framework).
While most of this is true of any standard anti-cheat, users should be aware that XIGNCODE3 is able to scan the entire user memory cache and DLL calls, including physical-state APIs such as GetAsyncKeyState, where it reads the physical state of hardware peripherals - essentially becoming a hardware keylogger. The long history of reverse engineering this software shows that Wellbia collects a great deal of user data for internal processing in order to build whitelists of processes and strings, analyzed by evaluating PE binaries. Having full access to your OS, it is also known to scan user file directories and to collect and store the paths of files modified within the last 48 hours, for the sake of detecting possible sources of bypassing.
All this data is ultimately sent to Wellbia's host servers - including via API calls to Korean servers - in order to run services such as whitelists, improve algorithm accuracy, and run comparative statistics and analysis based on binaries, strings and common flags.
This kind of access is a risk with any such service, including BattlEye, EasyAntiCheat, etc., but with Wellbia - and therefore Bluehole - a couple of points are particularly worrying:
(not to mention you can literally just block the service from installing, which by itself is already a hilarious facepalm situation, and nowhere does the TSL call for an API of the service)
  1. Starting off, Wellbia is a rather small development company with only one product on the market, sold to other rather small companies, the majority of them held by the Chinese government or based in countries where data handling, human rights and user privacy are heavily disregarded. This makes my tinfoil hat suspect that the studio's network security isn't as fortified as that of a Sony (a company that has itself abused rootkits), on budget alone. Their website is absolutely atrocious and amateurish - for an international company dealing with international stakeholders and clients, the amount of poor English, errors and ambiguous information on their presentation site is impressive; there are instances where the product name isn't even written correctly in their own EULA. If a company can't invest in basic PR and presentation, it leaves me with the bitter taste that their network security isn't any better. They may be able to handle user binaries, but network security is a completely different line of work. The fact that hackers are easily able to heartbeat their API network servers only confirms this for me.
  2. This is the most fun one. Wellbia's website and terms and conditions explicitly say that they are not to be held accountable should anything happen - terms that you agree to, and are legally bound by, by default when you accept Bluehole's terms and conditions: "Limitations of Company Responsibility
  1. IGNCODE3 is a software provided for free to users. Users judge and determine to use services served by software developers and providers, and therefore the company does not have responsibility for results and damages which may have occurred from XIGNCODE3 installation and use.
(the fact that in clause 1 they can't even be bothered to write the name of their own product correctly shows how little they care about things in general - you can have a look at this whole joke of a ToS, which I could probably put more effort into writing myself: https://www.wellbia.com/?module=Html&action=SiteComp&sSubNo=5 - so I'm sorry if I don't trust where my data goes)
  3. It rather annoys me that Bluehole adopted this mid-release, after the product had already been sold. When I initially bought the product, nowhere was it written that the user's operating system data would be collected by a third-party company and sent to servers located in APAC (and I'm one of those people who heavily reads terms and conditions) - and the current ToS still only touches on this topic ambiguously and in the slightest way: it does not say which data gets collected, nor disclose who holds it or where - "third party" could be literally anyone. That is a major disrespect for your consumers. I'm rather annoyed because when I purchased the product in the very early stages of the game I never agreed to any kernel-level data collection held abroad without disclosure of what data is actually being collected - otherwise it would have been a hard no on the purchase. Changing the rules of the game and the terms and conditions in the middle of the product's release leaves me with two options: submit to your terms, or stop using a product I've already purchased and which now has no use. Both the in-game changes and these third-party additions are so different from my initial purchase that it feels like buying a shower which, a year later, is so heavily modified that it decides to be a toilet.
I would really like you, Bluehole, to show me the initial terms and conditions from when the game was first released and to offer me a refund, since you decided to change the product and its terms and conditions midway - changes I don't agree with but am left empty-handed against, with no choice but to abandon the product. That turns this purchase into a service I used for X months rather than a good.
I really wish this topic had more visibility, as I know the majority of users are completely in the dark about it. I also wish Valve and new game companies would make an effort, when curating their games in the future, to enforce disclosure about data handling and to define how far a product can change before it no longer resembles what was purchased - I literally bought a third-person survival shooter and ended up with a Chinese FPS carrying a rootkit.
Sincerely, a pissed-off customer - one who, unlike the majority, is concerned about data privacy, and who hopes you are eventually held accountable for changing sensitive contract topics such as User Privacy mid-release.
-----
EDIT:
To completely remove it from your system, should you wish to:

Locate the file xhunter1.sys. It lives in this directory: C:\Windows\xhunter1.sys

Remove the registry entry (run regedit from a command prompt). The entry is located here: HKEY_LOCAL_MACHINE > SYSTEM > ControlSet001 > Services > xhunter
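If you'd rather do both steps from an elevated (Administrator) PowerShell prompt, something along these lines should work - a rough sketch based purely on the paths above; back up first, and the driver file may refuse to delete until after a reboot:
Remove-Item -Path "C:\Windows\xhunter1.sys" -Force  # delete the driver file
Remove-Item -Path "HKLM:\SYSTEM\ControlSet001\Services\xhunter" -Recurse -Force  # delete the service registry key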


For more information about XIGNCODE3 and previous successful abuses that show the malign potential of the rootkit (kudos to Psychotropos):

- https://x86.re/blog/xigncode3-xhunter1.sys-lpe/
- https://github.com/Psychotropos/xhunter1_privesc
submitted by cosmonauts5512 to PUBATTLEGROUNDS [link] [comments]

What's new in macOS 11, Big Sur!

It's that time of year again, and we've got a new version of macOS on our hands! This year we've finally jumped off the 10.xx naming scheme and are now going to 11! And with that, a lot has changed under the hood in macOS.
As with previous years, we'll be going over what's changed in macOS and what you should be aware of as a macOS and Hackintosh enthusiast.

Has Nvidia Support finally arrived?

Sadly, every year I have to answer the obligatory question: no, there is no new Nvidia support. Currently Nvidia's Kepler line is the only natively supported generation.
However, macOS 11 makes some interesting changes to the boot process, specifically moving GPU drivers into stage 2 of booting. This is relevant because of Apple's initial reason for killing off Web Drivers: Secure Boot. Secure Boot cannot work with Nvidia's Web Drivers because of how early Nvidia's drivers have to initialize, and so Apple refused to sign the binaries. With Big Sur, third-party GPU support could return, but the chances are still super slim - just slightly higher than with 10.14 and 10.15.

What has changed on the surface

A whole new iOS-like UI

Love it or hate it, we've got a new UI more reminiscent of iOS 14, with hints of skeuomorphism (a somewhat subtle callback to previous Mac UIs, which had neat details in the icons).
You can check out Apple's site to get a better idea:

macOS Snapshotting

Snapshotting is a feature initially baked into APFS back in 2017 with the release of macOS 10.13, High Sierra; now macOS's main System volume has become both read-only and snapshotted. What this means is:
However there are a few things to note with this new enforcement of snapshotting:

What has changed under the hood

Quite a few things actually! Both in good and bad ways unfortunately.

New Kernel Cache system: KernelCollections!

So for the past 15 years, macOS has been using the prelinked kernel as its form of kernel and kext caching. With macOS Big Sur's new read-only, snapshot-based system volume, a new version of caching has been developed: KernelCollections!
How this differs from previous OSes:

Secure Boot Changes

With regard to Secure Boot, all officially supported Macs will now support some form of Secure Boot even if there's no T2 chip present. This is done in 2 stages:
While technically these security features are optional and can be disabled after installation, many features, including OS updates, will no longer work reliably once they are disabled. This is due to the heavy reliance on snapshots for OS updates, as mentioned above, so we highly encourage all users to ensure that SecureBootModel is set to Default or higher at a minimum.

No more symbols required

This point is the most important, as symbols are what we use for kext injection in OpenCore. Currently Apple has left symbols in place, seemingly for debugging purposes; however, this is a bit worrying, as Apple could outright remove them in later versions of macOS. For Big Sur's cycle we'll be good on that end, but we'll be keeping an eye on future releases of macOS.

New Kernel Requirements

With this update, the AvoidRuntimeDefrag Booter quirk in OpenCore broke, and because of this the macOS kernel falls flat when trying to boot. The reason is that cpu_count_enabled_logical_processors requires the MADT (APIC) table, so OpenCore will now ensure this table is made accessible to the kernel. Users will, however, need a build of OpenCore 0.6.0 with commit bb12f5f or newer to resolve this issue.
Additionally, both Kernel Allocation requirements and Secure Boot have also broken with Big Sur due to the new caching system discussed above. Thankfully these have also been resolved in OpenCore 0.6.3.
To check your OpenCore version, run the following in terminal:
nvram 4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version
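Assuming your config leaves ExposeSensitiveData at a value that publishes the version variable, the output is a single line containing the build type, version and build date - roughly like the following (your string will differ):
4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version    REL-063-2020-11-02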
If you're not up-to-date and running OpenCore 0.6.3+, see here on how to upgrade OpenCore: Updating OpenCore, Kexts and macOS

Broken Kexts in Big Sur

Unfortunately, with the aforementioned KernelCollections, some kexts have broken or been hindered in some way. The main kexts that currently have issues are anything relying on Lilu's userspace patching functionality:
Thankfully, the most important kexts rely on the kernelspace patcher, which is in fact working again.

MSI Navi installer Bug Resolved

For those receiving boot failures in the installer due to having an MSI Navi GPU installed, macOS Big Sur has finally resolved this issue!

New AMD OS X Kernel Patches

For those running on AMD-Based CPUs, you'll want to also update your kernel patches as well since patches have been rewritten for macOS Big Sur support:

Other notable Hackintosh issues

Several SMBIOS have been dropped

Big Sur dropped a few Ivy Bridge and Haswell based SMBIOS from macOS, so check below to make sure yours wasn't dropped:
If your SMBIOS was supported in Catalina and isn't included above, you're good to go! We also have a more in-depth page here: Choosing the right SMBIOS
For those wanting a simple translation for their Ivy and Haswell Machines:

Dropped hardware

Currently only certain hardware has been officially dropped:

Extra long install process

Due to the new snapshot-based OS, installation now takes some extra time for sealing. If you get stuck at Forcing CS_RUNTIME for entitlement, do not shut down - that will corrupt your install and break the sealing process, so please be patient.

X79 and X99 Boot issues

With Big Sur, IOPCIFamily went through a decent rewrite, causing many X79 and X99 boards to fail to boot and to kernel panic in IOPCIFamily. To resolve this issue, you'll need to disable the unused uncore bridge:
You can also find prebuilts here for those who do not wish to compile the file themselves:

New RTC requirements

With macOS Big Sur, AppleRTC has become much more picky about whether your OEM correctly mapped the RTC regions in your ACPI tables. This is mainly relevant on Intel's HEDT series boards; I documented how to patch said RTC regions in OpenCorePkg:
For those having boot issues on X99 and X299, this section is super important; you'll likely get stuck at PCI Configuration Begin. You can also find prebuilts here for those who do not wish to compile the file themselves:

SATA Issues

For some reason, Apple removed the AppleIntelPchSeriesAHCI class from AppleAHCIPort.kext. Due to the outright removal of the class, trying to spoof to another ID (generally done by SATA-unsupported.kext) can fail for many and create instability for others.
* A partial fix is to block Big Sur's AppleAHCIPort.kext and inject Catalina's version with any conflicting symbols patched. You can find a sample kext here: Catalina's patched AppleAHCIPort.kext
* This will work in both Catalina and Big Sur, so you can remove SATA-unsupported if you want. However, we recommend setting the MinKernel value to 20.0.0 to avoid any potential issues.

Legacy GPU Patches currently unavailable

Due to major changes in many frameworks around GPUs, those using ASentientBot's legacy GPU patches are currently out of luck. We recommend users with these older GPUs either stay on Catalina until further developments arise or buy an officially supported GPU.

What’s new in the Hackintosh scene?

Dortania: a new organization has appeared

As many of you have probably noticed, a new organization focusing on documenting the hackintoshing process has appeared. Originally under my alias, Khronokernel, I started to transition my guides over to this new family as a way to concentrate the vast amount of information around Hackintoshes to both ease users and give a single trusted source for information.
We work quite closely with the community and developers to ensure information's correct, up-to-date and of the best standards. While not perfect in every way, we hope to be the go-to resource for reliable Hackintosh information.
And for the times our information is either outdated, missing context or generally needs improving, we have our bug tracker to allow the community to more easily bring attention to issues and speak directly with the authors:

Dortania's Build Repo

For those who either want to run the latest builds of a kext or need an easy way to test old builds of something, Dortania's Build Repo is for you!
Kexts here are built right after each commit, and the repo currently covers most of Acidanthera's kexts as well as some from third-party devs. If you'd like to add support for more kexts, feel free to PR: Build Repo source

True legacy macOS Support!

As of OpenCore's latest version, 0.6.2, you can now boot every x86-based build of OS X/macOS! A huge achievement on @Goldfish64's part: we now support every major kernel cache version, both 32- and 64-bit. This means machines like Yonah and newer should work great with OpenCore, and you can even relive the old days of OS X, like OS X 10.4!
Dortania's guides have been updated accordingly to accommodate builds of those eras; we hope you get as much enjoyment going back as we did working on this project!

Intel Wireless: More native than ever!

Another amazing step forward for the Hackintosh community: near-native Intel Wi-Fi support! Thanks to the endless work of the many contributors to the OpenIntelWireless project, we can now use Apple's built-in IO80211 framework to get support nearly identical to that of Broadcom wireless cards, including features like network access in recovery and Control Center support.
For more info on the developments, please see the itlwm project on GitHub: itlwm

Clover's revival? A Frankenstein of a bootloader

As many in the community have seen, a new bootloader popped up back in April of 2019 called OpenCore. This bootloader was made by the same people behind projects such as Lilu, WhateverGreen, AppleALC and many other extremely important utilities for both the Mac and Hackintosh communities. OpenCore's design was properly thought out, with security auditing and a proper roadmap laid down; it was clear that this was to be the next stage of hackintoshing for the years we have left with x86.
And now let's bring this back to the old crowd favorite, Clover. Clover has been having a rough time of late, both community- and stability-wise: with many devs jumping ship to OpenCore and Clover's stability breaking more and more with C++ rewrites, it was clear Clover was on its last legs. Interestingly enough, the community didn't want Clover to die - similar to how Chameleon lived on through Enoch - and thus we now have the Clover OpenCore integration project (now merged into master with r5123+).
The goal is to combine OpenCore into Clover, allowing the project to live a bit longer, as Clover's current state can no longer boot macOS Big Sur or older versions of OS X such as 10.6. As of writing, this project seems a bit confused, as there is little reason to actually support Clover: many of Clover's properties have feature parity in OpenCore, and trying to combine both C++ and C ruins many of the features and benefits either language provides. The main feature OpenCore does not support is macOS-only ACPI injection; the reasoning is covered here: Does OpenCore always inject SMBIOS and ACPI data into other OSes?

Death of x86 and the future of Hackintoshing

With macOS Big Sur, a big turning point is about to happen with Apple and their Macs. As we know, Apple will be shifting to in-house designed Apple Silicon Macs (really just ARM), and thus x86 machines will slowly be phased out of their lineup within 2 years.
What does this mean for both x86 based Macs and Hackintoshing in general? Well we can expect about 5 years of proper OS support for the iMac20,x series which released earlier this year with an extra 2 years of security updates. After this, Apple will most likely stop shipping x86 builds of macOS and hackintoshing as we know it will have passed away.
For those still in denial and hope something like ARM Hackintoshes will arrive, please consider the following:
So while we may be heartbroken that the journey is coming to a stop in the somewhat near future, hackintoshing will remain a piece of Apple's history. So enjoy it now while we still can - and we here at Dortania will continue supporting the community with our guides till the very end!

Getting ready for macOS 11, Big Sur

This will be your short run down if you skipped the above:
For the last 2, see here on how to update: Updating OpenCore, Kexts and macOS
As for downloading Big Sur, currently gibMacOS (run from macOS) or Apple's own software updater are the most reliable methods for grabbing the installer. Windows and Linux support is still unknown, so please stand by as we continue to look into the situation; macrecovery.py may be more reliable if you require the recovery package.
And as with every year, the first few weeks to months of a new OS release are painful in the community. We highly advise first-time installers to stay away from Big Sur. The reason is that we cannot determine whether issues are Apple-related or specific to your machine, so it's best to install and debug a machine on a known working OS before testing out the new and shiny.
For more in-depth troubleshooting with Big Sur, see here: OpenCore and macOS 11: Big Sur
submitted by dracoflar to hackintosh [link] [comments]

Facebook Connect / Quest 2 - Speculations Megathread

EDIT: MAJOR UPDATE AT BOTTOM
Welcome to the "Speculations" mega thread for the device possibly upcoming in the Oculus Quest line-up. This thread will be a compilation of leaks, speculation & rumors updated as new information comes out.
Let's have some fun and go over the leaks, rumors and speculation leading up to Facebook Connect. We'll have a full megathread going during Connect, but this should be a great thread to look back on afterward.
Facebook Connect is happening September 16th at 10 AM PST, more information can be found here.

Leaks
In March, Facebook’s public Developer Documentation website started displaying a new device called ‘Del Mar’, with a ‘First Access’ program for developers.
In May, we got the speculated specs, based on the May Bloomberg Report (Original Paywall Link):
• “at least 90Hz” refresh rate
• 10% to 15% smaller than the current Quest
• around 20% lighter
• “the removal of the fabric from the sides and replacing it with more plastic”
• “changing the materials used in the straps to be more elastic than the rubber and velcro currently used”
• “a redesigned controller that is more comfortable and fixes a problem with the existing controller”

On top of that, the "Jedi Controller" drivers leaked, which are now assumed to be V3 Touch Controllers for the upcoming device.
The IMUs seem significantly improved, and the reference to 60Hz (vs. 30Hz) also seems to imply improved tracking.
It's also said to perhaps have improved haptics & analog finger sensing instead of binary/digital.
Now as of more recent months, we had the below leaks.
Render (1), (2)
WalkingCat seems to believe the device is called "Quest 2"; unfortunately, his Twitter account has since been taken down.
Real-life pre-release model photos
Possible IPD Adjustment
From these photos and details we can discern that:
Further features speculation based on firmware digging (thanks Reggy04 from the VR Discord for quite a few of these), as well as other sources, all linked.

Additional Sources: 1/2/3/4
Headset Codenames
We've seen a few codenames going around at this point, Reggy04 provided this screenshot that shows the following new codenames.
Pricing Rumors
So far, the most prevalent pricing we've seen is $299 for 64GB and $399 for 256GB.
These were shown by a Walmart page for Point Reyes with a release date of September 16 and a Target price leak with a street date of October 13th

Speculation
What is this headset?
Speculation so far is this headset is a Quest S or Quest 2
OR
This is a flat-out cheaper-to-manufacture, small upgrade to the Oculus Quest to keep up with demand and to iterate the design slowly.
Again, This is all speculation, nothing is confirmed or set in stone.
What do you think this is and what we'll see at FB Connect? Let's talk!
Rather chat live? Join us on the VR Discord
EDIT: MAJOR UPDATE - Leaked Videos.
6GB of RAM, XR2 Platform, "almost 4k display" (nearly 2k per eye) Source
I am mirroring all the videos in case they get pulled down.
Mirrors: Oculus Hand Tracking , Oculus Casting, Health and Safety, Quest 2 Instructions, Inside the Upgrade
submitted by charliefrench2oo8 to OculusQuest [link] [comments]

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step through setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why, all in one convenient manual made for Windows users. That said, if you want to try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers running on those operating systems. Be warned, however, that there are some system requirements necessary to run the CodeReady Containers we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual on Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for developing and testing purposes. There are also CodeReady Workspaces, these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment and management by streamlining and automating that process.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual some knowledge is mandatory. Because most of the commands are entered in the command line interface, it is necessary to know how it works and how you can browse through files and folders. If you either don't have this basic knowledge or have trouble with the basic command line interface commands in PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system's documentation or introduction guides, though the documentation can be overwhelming due to the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
macOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge of PaaS-related technology such as Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

Red Hat OpenShift CodeReady Containers requires the following minimum hardware:
Hardware requirements
CodeReady Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM (AMD) virtualization support; this has to be enabled in the BIOS
Software requirements
Red Hat OpenShift CodeReady Containers has the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and Network Manager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux distribution - Installation command
Fedora - sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS - su -c 'yum install NetworkManager'
Debian/Ubuntu - sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on "https://www.openshift.com/", where you need to press Log in and after that select the option "Create one now".
After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from "https://cloud.redhat.com/openshift/install/crc/installer-provisioned". Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use the command line interface to go to the location in your $PATH where you extracted the CodeReady zip.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should show you the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
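If everything is in order, the output has roughly the following shape (the version numbers here are only illustrative - yours will differ):
CodeReady Containers version: 1.17.0+99f5c87
OpenShift version: 4.5.14 (embedded in binary)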
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. During the process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, you need to keep in mind that it is not possible to make any changes to the virtual machine. For this tutorial however it is not necessary to change the configuration, if you don’t want to make any changes please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a nameserver error later on; if this is the case, please start it with: crc start -n 1.1.1.1
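When the cluster has finished starting, crc start prints the cluster URL and the credentials you will need later in this manual. The exact wording changes between crc releases, but the tail of the output looks roughly like this (illustrative only, passwords redacted):
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p <password> https://api.crc.testing:6443'
INFO To login as an admin, run 'oc login -u kubeadmin -p <password> https://api.crc.testing:6443'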

Configuration

It is not necessary to change the default configuration in order to continue with this tutorial; this chapter is here for those who wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand; the available subcommands for this binary and virtual machine are:
get, this command allows you to see the values of a configurable property
set/unset, this command sets or clears the value of a configurable property.
view, this command starts the configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this potential issue, you can set the value of a property that starts with skip-check or warn-check to true, to skip the check or to emit a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get
C:\Users\[username]\$PATH>crc config set
C:\Users\[username]\$PATH>crc config unset
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPU’s and amount of memory available for the virtual machine.
To increase the number of vCPUs available to the virtual machine, use $crc config set CPUs <number>. Keep in mind that the default number of vCPUs is 4 and the number you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <MiB>. Keep in mind that the default amount of memory is 9216 mebibytes and the amount you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs <number>
C:\Users\[username]\$PATH>crc config set memory <MiB>
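As a concrete example, the following would give the virtual machine 6 vCPUs and 12 GiB (12288 MiB) of memory. Note that recent crc releases spell the property names in lowercase, so use cpus/memory if the capitalized form is rejected:
C:\Users\[username]\$PATH>crc config set cpus 6
C:\Users\[username]\$PATH>crc config set memory 12288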

Configuring the DNS

Window / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings on Windows is done by executing crc setup; this command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks are run to verify the configuration.
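A quick, optional way to verify the Windows DNS entries after running crc setup is to resolve the API hostname from PowerShell; the returned address should point at the CodeReady Containers virtual machine:
C:\Users\[username]>nslookup api.crc.testing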

macOS DNS setup

MacOS expects the following DNS configuration for the CodeReady Containers
● The CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires an entry for api.crc.testing in /etc/hosts pointing at the VM's IP address in order to function properly.

Linux DNS setup

CodeReady Containers expects a slightly different DNS configuration on Linux, where it relies on NetworkManager to manage networking. NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward requests for the crc.testing and apps-crc.testing domains to 192.168.130.11. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this looks like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift clusters can be accessed through the OpenShift web console or the client binary(oc).
First you need to execute the $crc console command, this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console
C:\Users\[username]\$PATH>crc console --credentials

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary.
To test whether this step went correctly, execute the following command; if it returns without errors, oc is set up properly:
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to login as a developer user, this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start will provide you with the password that is needed to login with the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc can now be used to interact with your OpenShift cluster. If you for instance want to verify if the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default the CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 
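The command prints one row per cluster Operator; an illustrative excerpt is shown below (the Operator names are real, but versions and timings will differ on your cluster):
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication   4.5.14    True        False         False      15m
console          4.5.14    True        False         False      14m
dns              4.5.14    True        False         False      20m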

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to log in to the cluster. If you have not yet done this, it can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu at the top left.
Now that you are properly logged in, press the dropdown menu shown in the image below, and from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with a displayname CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell:
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm
imagestream.image.openshift.io/mediawiki imported

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application from the previously imported image, go back to the console and the Topology view. From there, select Container Image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creating process you should see the following, this means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling, and horizontal scaling. Vertical scaling is adding only more CPU and hard disk and is no longer supported by OpenShift. Horizontal scaling is increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view seen in the previous step. By pressing the up or down arrows, pods of the same application can be added or removed. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application: the more you scale it up, the more resources it will take.
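The same scaling can also be done from PowerShell with the oc client. A minimal sketch, assuming the deployment created earlier is named mediawiki (check the actual name first with oc get deployments):
C:\Users\[username]\$PATH>oc get deployments
C:\Users\[username]\$PATH>oc scale deployment mediawiki --replicas=3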

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since OpenShift Container platform is built on Kubernetes it might be interesting to know some theory about its networking. Kubernetes, on which the OpenShift Container platform is built, ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP address. This makes all containers within the Pod behave as if they were on the same host. By giving each pod its own IP address, pods can be treated as physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default, all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
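For reference, a route can also be created from the command line instead of the console. A sketch assuming the service is named mediawiki (list the services first with oc get svc to confirm the name):
C:\Users\[username]\$PATH>oc get svc
C:\Users\[username]\$PATH>oc expose service mediawiki
C:\Users\[username]\$PATH>oc get routes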
Storage

OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to request persistent volumes without needing any knowledge of the underlying infrastructure.
Within this storage there are a few configuration options:
It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and the storage therefore cannot yet be reassigned to another PV.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV, this can be done by executing the following command
$oc delete pv <pv-name> 
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset or if you with to reuse the same storage asset you can now create a PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and will display the following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' 
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}' 
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' 
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check that you are logged in as Developer and click on "Monitoring". Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group's members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we're going to create a new user. The way this is done depends on the identity provider, and on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as following
$oc create user <username> 
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-user-name> 
The <identity-provider> is the name of the identity provider in the master configuration. For example, the following command creates an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-user-name> <username> 
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we're going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <binding-name> --clusterrole=<role> --user=<username> 
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller --clusterrole=cluster-admin --user=admin 

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is a possibility that your CodeReady Containers VM can't connect to the internet due to a nameserver error. When this is encountered, a fix that worked for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V, it might be because your user account is not an administrator and therefore isn't in the Hyper-V Administrators group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.
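Alternatively, the same change can be made from an elevated PowerShell prompt in one line (replace the placeholder with the actual account name; log out and back in afterwards for the group membership to take effect):
Add-LocalGroupMember -Group "Hyper-V Administrators" -Member "[username]"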

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this will look, together with a few terms that require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Openshift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift [link] [comments]

Allow me to explain how traditional game "patching" as on consoles and even PC by game developers is not always required for games to run better on Stadia over time... Stadia engineers can do it on their own to ever improve the visual quality of individual library titles.

I've been mulling over how to write this post without it getting too wordy and just turn people away from the topic... but I feel it's important for people to consider in regards to investing in game purchases on Stadia. Even though a years-old game is ported to Stadia by a 3rd party publisher, it is not abandoned by that developer after game engine code changes are required... at that point the Stadia team can take over tweaking the performance of the game as the Linux OS Kernel / Vulkan API / eventually hardware undergo improvements over time.
I've seen heated comments/reactions in these parts when people start noticing older games suddenly looking or performing better... even though there is no sign of a game patch from the developer or announcement that such a thing has happened. (FFXV.) I'm here to explain how this is totally possible.
(Disclaimer: I've been a gaming platform tester for 13 years, on a platform based on the Gentoo Linux kernel. This year I have just branched directly into OS kernel / package testing itself.)
A software package / game is made up of not only game code and pretty graphics. Another fairly big piece of the puzzle is configuration files. Especially in the Linux world. Another thing about Linux is it never sits still. It's open source and ever growing and improving through constant iteration by engineers around the world. This includes the Vulkan API itself. Stadia's platform and Vulkan API has likely undergone dozens if not hundreds of iterations in the past year alone. It is CONSTANTLY improving, even if ever so slightly.
For comparison, a gaming console is a completely sealed environment. Not only does the hardware never change, but the OS and base platform have very little wiggle room for improvement. Most significant improvements will happen within the first few years of a new console's life. But often the gains from that never spill over into the games themselves... but rather the platform's UI and menus, such as adding new features outside of the game. For things to change about a game at all, a patch MUST be delivered to the console. There is no other option, because the config files of individual games can't be touched in any other way.
On PC you often have access to these config files (at the developer's discretion of what they choose to expose, of course). Many people know how you can start digging into these settings and adjust number values and flip on/off flags to affect your game. But these configuration files have default values set by the developers that are expected to never really be touched by the players... so even when they do want to change something for the benefit of everyone, they need to issue a game patch.
Now on a Cloud platform such as Stadia, when a game is delivered by a developer to the platform, of course their game engine code (binaries) cannot be altered by anyone but the game developer themselves as usual... so if there is bugs in code, or game engine code improvements that can be done, the developer must deploy a game patch to make these changes, as we have seen and people would expect. However the configuration files which define how the game performs on the platform's hardware are completely exposed... and this is what the Stadia team most likely has FULL control over. So if the Vulkan API gets some improvements or code optimizations, and they can squeeze a little bit more performance out of the game, the Stadia team can go into these config files and adjust things accordingly.
Not only configurations but also the graphical assets themselves (media) can be swapped for higher-res assets as well. It's also very possible that the publishers/devs provide Stadia with multiple versions of their media at different quality levels: higher-res textures that can be swapped in if the platform is optimized enough to handle them, etc.
Why would the Stadia team take on the management of all the games in such a way? Because it's absolutely in their best interest to. This is also a big favor to the game publisher... Stadia does work to improve the game, ultimately generating better reception and sales of these games, producing revenue for both Stadia and the publisher.
Cloud platforms are a new animal in the gaming world. How the games are maintained over time can be done very differently than what we are used to with console and PC.
So naturally this turned into a wall of text but I couldn't do it any other way... some things simply need to be explained as clearly as possible to get across.
tl;dr: As the Stadia platform / Vulkan API improve constantly over time, Stadia engineers can tweak the configurations of ANY game to make them look/run better without the developers needing to be involved and patch the games.
submitted by Z3M0G to Stadia [link] [comments]

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money on registering a domain name, you can use services like dynu.com or duckdns.org. Between the two, I prefer dynu.com, because you can set every type of DNS record (TXT records are only available after 30 days, but that's a fair trade-off for not spending ~15€/year on a domain name), which is needed for the mailserver specifically.
Also, I highly suggest you read the documentation of the software used, since I cannot cover every feature.

Hardware


Software

(minor utilities not included)

Guide

First things first, we need to flash the OS to the SD card. The Raspberry Pi Imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in the beta stage, so I am going to cover the 32-bit version (but with a 64-bit kernel, we'll get to that later).
Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
Once powered on, we need to attach it to the LAN, via an Ethernet cable. Once done, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.

raspi-config

Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
arm_64bit=1 
$ sudo reboot
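After the reboot you can quickly verify that the 64-bit kernel is actually in use; if the flag was picked up correctly, uname should report aarch64 instead of armv7l:
$ uname -m
aarch64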

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damages caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024 
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. According to your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter, you'll empty the partition, moving every bit from swap to RAM, eventually calling in the OOM killer.
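To confirm the new swap size actually took effect after running the commands above, you can check with:
$ free -h
$ swapon --show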

APT

In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0";
APT::Install-Suggests "0";

Update

Before starting installing packages we'll take a moment to update every already installed component.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity sake we'll give a static IP address for our server (within our LAN of course). You can set it using your router configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.5/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
$ sudo reboot
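Once the Raspberry Pi is back up, a quick way to verify the static address and the default route is:
$ ip addr show eth0
$ ip route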

Emailing

The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our mail domain. Since my domain is naspi.webredirect.org, the mail hostname will be mail.naspi.webredirect.org.
$ sudo hostnamectl set-hostname mail.naspi.webredirect.org
$ sudo nano /etc/hosts
127.0.0.1 mail.naspi.webredirect.org localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 naspi
Now we can download and setup iRedMail
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for webserver, set Nginx.
When asked for DB engine, set MariaDB.
When asked for, set a secure and strong password.
When asked for the domain name, set yours, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it in the next step.
When asked for, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email [email protected] -d mail.naspi.webredirect.org -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem;
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.naspi.webredirect.org/chain.pem
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;
    root /var/www/html;
    index index.php index.html;
    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}
server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
}
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart
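Any time you touch these Nginx files it's a good habit to validate the syntax; nginx -t only checks the configuration and prints any errors, without affecting the running service:
$ sudo nginx -t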

.local domain

If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart
When editing the nftables configuration file, add the above lines just below the other specified ports, within the chain input block. This is needed because avahi communicates via the 5353 UDP port.
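To check that the .local name is actually being announced, try resolving it from another machine on your LAN (naspi is the hostname we set earlier with raspi-config; replace it if you chose a different one):
$ ping naspi.local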

RAID 1

At this point we can start setting up the disks. I highly recommend you to use two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks will be named /dev/sda1 and /dev/sdb1. To find out the names issue the sudo fdisk -l command.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.
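Keep in mind that the initial RAID 1 synchronization can take several hours on large disks. You can follow its progress (and the general health of the array) with:
$ cat /proc/mdstat
$ sudo mdadm --detail /dev/md/RED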

fstab

To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find out these issuing the command ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.
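Besides sudo mount -a, you can confirm that everything landed on the right mount point (here /NAS/Disk1 is just the example mount point used above) with:
$ findmnt /NAS/Disk1
$ lsblk -f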

S.M.A.R.T.

To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
start_smartd=yes 
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m [email protected] 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore the 190 and 194 parameters, since those are the temperature value and would trigger the alarm at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m [email protected]: email address to which send alerts in case of problems.
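Before relying on the scheduled tests, you can run a manual check on a single disk to make sure smartctl can talk to it (replace /dev/sda with your actual device):
$ sudo smartctl -a /dev/sda
$ sudo smartctl -t short /dev/sda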

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="[email protected]%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount@.service
$ sudo nano /usr/local/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if [ ${FS_UUID} == ${DISK1_UUID} ] || [ ${FS_UUID} == ${DISK2_UUID} ]; then
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z ${FS_LABEL} ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /usr/local/bin/automount
The udev rule triggers when the kernel announces that a USB device has been plugged in, calling a service which is kept alive as long as the USB device remains plugged in. The service, when started, calls a bash script which will try to mount any known disk using fstab; otherwise the disk will be mounted to a default location, using its label if available (the partition name is used otherwise).
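After creating the rule and the service, you can make udev pick them up without rebooting and watch the events while plugging a drive in, to confirm the automount service is actually being triggered:
$ sudo udevadm control --reload-rules
$ udevadm monitor --udev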

Netdata

Let's now install netdata. For this another handy script will help us.
$ bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use Nginx as a reverse proxy for this.
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}
server {
    listen 80;
    server_name netdata.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarms notifications. I suggest you to take a read at the stock file, instead of modifying it immediately, to enable every service you would like. You'll spend some time, yes, but eventually you will be very satisfied.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration
# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="NetData [email protected]"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="[email protected]"
# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"
###############################################################################
# RECIPIENTS PER ROLE
# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"
# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"
# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart

Samba

Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for the sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = 127.0.0.0/8 eth0
bind interfaces only = yes
# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file [email protected]
panic action = /usr/share/samba/panic-action %d
# Server role
server role = standalone server
obey pam restrictions = yes
# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
security = user
#======================= Share Definitions =======================
[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
And at last let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart
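To make sure the Samba configuration is valid and the share is visible, you can run testparm (it ships with Samba) and, if you have the smbclient package installed, list the shares as the new user:
$ testparm -s
$ smbclient -L localhost -U NASbackup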

NextCloud

Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is something very similar to Google Drive, but opensource.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt
First of all, we need to create a database for nextcloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER [email protected] IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO [email protected] IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/nextcloud/
$ sudo chown -R www-data:www-data /var/www/nextcloud
$ sudo find /var/www/nextcloud/ -type d -exec sudo chmod 750 {} \;
$ sudo find /var/www/nextcloud/ -type f -exec sudo chmod 640 {} \;
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud { server 127.0.0.1:9999; keepalive 64; } server { server_name naspi.webredirect.org; root /vawww/nextcloud; listen 80; add_header Referrer-Policy "no-referrer" always; add_header X-Content-Type-Options "nosniff" always; add_header X-Download-Options "noopen" always; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-Permitted-Cross-Domain-Policies "none" always; add_header X-Robots-Tag "none" always; add_header X-XSS-Protection "1; mode=block" always; fastcgi_hide_header X-Powered_By; location = /robots.txt { allow all; log_not_found off; access_log off; } rewrite ^/.well-known/host-meta /public.php?service=host-meta last; rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last; rewrite ^/.well-known/webfinger /public.php?service=webfinger last; location = /.well-known/carddav { return 301 $scheme://$host:$server_port/remote.php/dav; } location = /.well-known/caldav { return 301 $scheme://$host:$server_port/remote.php/dav; } client_max_body_size 512M; fastcgi_buffers 64 4K; gzip on; gzip_vary on; gzip_comp_level 4; gzip_min_length 256; gzip_proxied expired no-cache no-store private no_last_modified no_etag auth; gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy; location / { rewrite ^ /index.php; } location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ { deny all; } location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) { deny all; } location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) { fastcgi_split_path_info ^(.+?\.php)(\/.*|)$; set $path_info $fastcgi_path_info; try_files $fastcgi_script_name =404; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $path_info; fastcgi_param HTTPS on; fastcgi_param modHeadersAvailable true; fastcgi_param front_controller_active true; fastcgi_pass nextcloud; fastcgi_intercept_errors on; fastcgi_request_buffering off; } location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) { try_files $uri/ =404; index index.php; } location ~ \.(?:css|js|woff2?|svg|gif|map)$ { try_files $uri /index.php$request_uri; add_header Cache-Control "public, max-age=15778463"; add_header Referrer-Policy "no-referrer" always; add_header X-Content-Type-Options "nosniff" always; add_header X-Download-Options "noopen" always; add_header X-Frame-Options "SAMEORIGIN" always; add_header X-Permitted-Cross-Domain-Policies "none" always; add_header X-Robots-Tag "none" always; add_header X-XSS-Protection "1; mode=block" always; access_log off; } location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ { try_files $uri /index.php$request_uri; access_log off; } } 
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to the page of your NextCloud and complete the installation process, providing the details about the database and the location of the data folder, which is nothing more than the location of the files you will save on the NextCloud. Because it might grow large, I suggest you specify a folder on an external disk.
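Once the web installer has finished, you can check that NextCloud is healthy from the command line with the built-in occ tool (the path assumes the /var/www/nextcloud location used above):
$ sudo -u www-data php /var/www/nextcloud/occ status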

Minarca

Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal, even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone https://gitlab.com/ikus-soft/minarca.git
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.
# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log
# Server interface
ServerHost=0.0.0.0
ServerPort=8080
# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi 4.
DefaultTheme=default
# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db
# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca tcp dport 8080 accept 
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca {
    server 127.0.0.1:8080;
    keepalive 64;
}
server {
    server_name minarca.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://minarca;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
    listen 80;
}
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

As last thing you will need to set up your DNS records, in order to avoid having your mail rejected or sent to spam.

MX record

name: @
value: mail.naspi.webredirect.org
TTL (if present): 90

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @
value: v=spf1 mx ~all
TTL (if present): 90

DKIM record

To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1), but remember to remove the double quotes and the line breaks.
name: dkim._domainkey
value: v=DKIM1; p= ...
TTL (if present): 90
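Once the record has been published you can verify it both locally and via DNS; the dig command needs the dnsutils package, and the domain below is of course just the example used in this guide:
$ sudo amavisd-new testkeys
$ dig +short TXT dkim._domainkey.naspi.webredirect.org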

DMARC record

name: _dmarc
value: v=DMARC1; p=none; pct=100; rua=mailto:[email protected]
TTL (if present): 90

Router ports

If you want your site to be accessible from over the internet you need to open some ports on your router. Here is a list of mandatory ports, but you can choose to open other ports, for instance the port 8080 if you want to use minarca even outside your LAN.

mailserver ports

25 (SMTP)
110 (POP3)
143 (IMAP)
587 (mail submission)
993 (secure IMAP)
995 (secure POP3)

ssh port

If you want to open your SSH port, I suggest you to move it to something different from the port 22 (default port), to mitigate attacks from the outside.
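As a rough sketch, moving SSH to another port only takes a couple of lines (2222 is an arbitrary example; remember to also allow the new port in nftables and forward it on your router):
$ sudo nano /etc/ssh/sshd_config
Port 2222
$ sudo service ssh restart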

HTTP/HTTPS ports

80 (HTTP)
443 (HTTPS)

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a samba share to have your files on every computer at home, a backup server for every device you own, and a webserver if you'll ever want to have a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi [link] [comments]

Forex Signals Reddit: top providers review (part 1)

Forex Signals Reddit: top providers review (part 1)

Forex Signals - TOP Best Services. Checked!

To invest in the financial markets, we must acquire good tools that help us carry out our operations in the best possible way. In this sense, we always talk about the importance of brokers, however, signal systems must also be taken into account.
The platforms that offer signals to invest in forex provide us with alerts that will help us in a significant way to be able to carry out successful operations.
For this reason, we are going to tell you about the importance of these alerts in relation to the trading we carry out, because, without a doubt, this type of system will provide us with very good information to invest at the right time and in the best assets in the different financial markets.
Within this context, we will focus on Forex signals, since it is the most important market in the world, since in it, multiple transactions are carried out on a daily basis, hence the importance of having an alert system that offers us all the necessary data to invest in currencies.
Also, as we all already know, cryptocurrencies have become a very popular alternative to investing in traditional currencies. Therefore, some trading services/tools have emerged that help us to carry out successful operations in this particular market.
In the following points, we will detail everything you need to know to start operating in the financial markets using trading signals: what are signals, how do they work, because they are a very powerful help, etc. Let's go there!

What are Forex Trading Signals?

https://preview.redd.it/vjdnt1qrpny51.jpg?width=640&format=pjpg&auto=webp&s=bc541fc996701e5b4dd940abed610b59456a5625
Before explaining the importance of Forex signals, let's start by making a small note so that we know what exactly these alerts are.
Thus, we will know that the signals on the currency market are received by traders to know all the information that concerns Forex, both for assets and for the market itself.
These alerts allow us to know the movements that occur in the Forex market and the changes that occur in the different currency pairs. But the great advantage that this type of system gives us is that they provide us with the necessary information, to know when is the right time to carry out our investments.
In other words, through these signals, we will know the opportunities that are presented in the market and we will be able to carry out operations that can become quite profitable.
Profitability is precisely another of the fundamental aspects that must be taken into account when we talk about Forex signals since the vast majority of these alerts offer fairly reliable data on assets. Similarly, these signals can also provide us with recommendations or advice to make our operations more successful.

»Purpose: predict movements to carry out Profitable Operations

In short, Forex signal systems aim to predict the behavior that the different assets that are in the market will present and this is achieved thanks to new technologies, the creation of specialized software, and of course, the work of financial experts.
In addition, it must also be borne in mind that the reliability of these alerts largely lies in the fact that they are prepared by financial professionals. So they turn out to be a perfect tool so that our investments can bring us a greater number of benefits.

The best signal services today

We are going to tell you about the 3 main alert system services that we currently have on the market. There are many more, but I can assure these are not scams and are reliable. Of course, not 100% of trades will be a winner, so please make sure you apply proper money management and risk management system.

1. 1000pipbuilder (top choice)

Fast track your success and follow the high-performance Forex signals from 1000pip Builder. These Forex signals are rated 5 stars on Investing.com, so you can follow every signal with confidence. All signals are sent by a professional trader with over 10 years investment experience. This is a unique opportunity to see with your own eyes how a professional Forex trader trades the markets.
The 1000pip Builder Membership is primarily a signal service for Forex trading. You will get all the facts you need to successfully follow the trading signals, set your stop loss and take profit, as well as additional tips and techniques!
You will get easy to use trading signals for Forex trades, including your entry, stop loss and take profit. Overall, the earnings target per month is 350 pips; depending on your funding this can be a high profit per month! (In fact, there is by no means a guarantee, but the past months have all been between 600 – 1000 pips.)
>>>Know more about 1000pipbuilder
Your 1000pip builder membership gives you all in hand you want to start trading Forex with success. Read the directions and wait for the first signals. You can trade them inside your demo account first, so you can take a look at the performance before you make investments real money!
Features:
  • Free Trial
  • Forex signals sent by email and SMS
  • Entry price, take profit and stop loss provided
  • Suitable for all time zones (signals sent over 24 hours)
  • MyFXBook verified performance
  • 10 years of investment experience
  • Target 300-400 pips per month
Pricing:
https://preview.redd.it/zjc10xx6ony51.png?width=668&format=png&auto=webp&s=9b0eac95f8b584dc0cdb62503e851d7036c0232b
VISIT 1000pipbuilder here

2. DDMarkets

Digital Derivatives Markets (DDMarkets) have been providing trade alert offerings since May 2014 - fully documenting their trade ideas in an open and transparent manner.
September 2020 performance report for DD Markets.
Their approach is simple: carry out extensive research, share their evaluation and then deliver a trading signal when triggered. Once issued, daily updates on the trade are dispatched to members via email.
It's essential to note that DDMarkets do not tolerate floating an open drawdown in an effort to chase profits at any cost - a common method used by less professional providers to 'fudge' performance statistics.
Verified Statistics: Not independently verified.
Price: plans from $74.40 per month.
Year Founded: 2014
Suitable for Beginners: Yes, (includes handy to follow trade analysis)
VISIT
-------

3. JKonFX

If you are looking for a forex signal service with a reliable (and profitable) track record, you can't go past Joel Kruger and the team at JKonFX.
Trading performance record for JKonFX.
Joel has delivered a reputable +59.18% journal performance for 2016, imparting real-time technical and fundamental insights, in an extremely transparent manner, to their 30,000+ subscriber base. Considered a low-frequency trader, alerts are only a small part of the overall JKonFX subscription. If you're searching for hundreds of signals, you may want to consider other options.
Verified Statistics: Not independently verified.
Price: plans from $30 per month.
Year Founded: 2014
Suitable for Beginners: Yes, (includes convenient to follow videos updates).
VISIT

The importance of signals to invest in Forex

Once we have known what Forex signals are, we must comment on the importance of these alerts in relation to our operations.
As we have already told you in the previous paragraph, having a system of signals to be able to invest is quite advantageous, since, through these alerts, we will obtain quality information so that our operations end up being a true success.

»Use of signals for beginners and experts

In this sense, we have to say that one of the main advantages of Forex signals is that they can be used by both beginners and trading professionals.
Both can benefit from using a trading signal system, because the more information and resources we have in our hands, the greater the probability of success we will have. Let's see how beginners and experts can take advantage of alerts:
  • Beginners: for inexperienced these alerts become even more important since they will thus have an additional tool that will guide them to carry out all operations in the Forex market.
  • Professionals: In the same way, professionals are also recommended to make use of these alerts, so they have adequate information to continue bringing their investments to fruition.
Now that we know that both beginners and experts can use forex signals to invest, let's see what other advantages they have.

»Trading automation

When we dedicate ourselves to working in the financial world, none of us can spend 24 hours in front of the computer waiting to perform the perfect operation, it is impossible.
That is why Forex signals are important, because, in order to carry out our investments, all we will have to do is wait for those signals to arrive, be attentive to all the alerts we receive, and thus, operate at the right time according to the opportunities that have arisen.
It is fantastic to have a tool like this one that makes our work easier in this regard.

»Carry out profitable Forex operations

These signals are also important, because the vast majority of them are usually quite profitable, for this reason, we must get an alert system that provides us with accurate information so that our operations can bring us great benefits.
But in addition, these Forex signals have an added value and that is that they are very easy to understand, therefore, we will have a very useful tool at hand that will not be complicated and will end up being a very beneficial weapon for us.

»Decision support analysis

A system of currency market signals is also very important because it will help us to make our subsequent decisions.
We cannot forget that, to carry out any type of operation in this market, we must first think it through carefully and know the exact moment at which our investments are going to bring us profits.
Therefore, all the information provided by these alerts will be a fantastic basis for future operations that we are going to carry out.

»Trading Signals made by professionals

Finally, we have to recall the idea that these signals are made by the best professionals. Financial experts who know perfectly how to analyze the movements that occur in the market and changes in prices.
Hence the importance of alerts, since they are very reliable and are presented as a necessary tool to operate in Forex and that our operations are as profitable as possible.

What should a signal provider be like?

https://preview.redd.it/j0ne51jypny51.png?width=640&format=png&auto=webp&s=5578ff4c42bd63d5b6950fc6401a5be94b97aa7f
As you have seen, Forex signal systems are really important for our operations to bring us many benefits. For this reason, at present, there are multiple platforms that offer us these financial services so that investing in currencies is very simple and fast.
Before telling you about the main services that we currently have available in the market, it is recommended that you know what are the main characteristics that a good signal provider should have, so that, at the time of your choice, you are clear that you have selected one of the best systems.

»Must send us information on the main currency pairs

In this sense, one of the first things we have to comment on is that a good signal provider, at a minimum, must send us alerts that offer us information about the 6 main currencies, in this case, we refer to the euro, dollar, The pound, the yen, the Swiss franc, and the Canadian dollar.
Of course, the data you provide us will be related to the pairs that make up all these currencies. Although we can also find systems that offer us information about other minorities, but as we have said, at a minimum, we must know these 6.

»Trading tools to operate better

Likewise, signal providers must also provide us with a large number of tools so that we can learn more about the Forex market.
We refer, for example, to technical analysis above all, which will help us to develop our own strategies to be able to operate in this market.
These analyzes are always prepared by professionals and study, mainly, the assets that we have available to invest.

»Different Forex signals reception channels

They must also make available to us different ways through which they will send us the Forex signals, the usual thing is that we can acquire them through the platform's website, or by a text message and even through our email.
In addition, it is recommended that the signal system we choose sends us a large number of alerts throughout the day, in order to have a wide range of possibilities.

»Free account and customer service

Other aspects that we must take into account to choose a good signal provider is whether we have the option of receiving, for a limited time, alerts for free or the profitability of the signals they emit to us.
Similarly, a final aspect that we must emphasize is that a good signal system must also have excellent customer service, which is available to us 24 hours a day and which we can contact through an email, a phone number, or a live chat, for greater immediacy.
Well, having said all this, in our last section we are going to tell you which are the best services currently on the market. That is, the most suitable Forex signal platforms to be able to work with them and carry out good operations. In this case, we will talk about ForexPro Signals, 365 Signals and Binary Signals.

Forex Signals Reddit: conclusion

To be able to invest properly in the Forex market, it is convenient that we get a signal system that provides us with all the necessary information about this market. It must be remembered that Forex is a very volatile market and therefore, many movements tend to occur quickly.
Asset prices can change in a matter of seconds, hence the importance of having a system that helps us analyze the market and thus know, what is the right time for us to start operating.
Therefore, although there are currently many signal systems that can offer us good services, the three that we have mentioned above are the ones that are best valued by users, which is why they are the best signal providers that we can choose to carry out. our investments.
Most of these alerts are quite profitable and in addition, these systems usually emit a large number of signals per day with full guarantees. For all this, SignalsForexPro, Signals365, or SignalsBinary are presented as fundamental tools so that we can obtain a greater number of benefits when we carry out our operations in the currency market.
submitted by kayakero to makemoneyforexreddit [link] [comments]

Looking for suggestions to improve encrypted /boot on Debian

Below is my install procedure
# For starting from install disc: # Advanced Options -> Rescue mode -> Execute shell in Installer environment # For this example we are assuming the drive with to setup is /dev/sda # Format virtual drive to have 1 large primary partition and mark it as bootable echo -e "o\nn\np\n1\n\n\na\nw" | fdisk /dev/sda # Encrypt entire volume # Default iter is 2000 and takes 22 seconds for grub to decrypt, adjust accordingly cryptsetup -v --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 50000 --use-random --verify-passphrase luksFormat --type luks1 /dev/sda1 # or if that takes too long to type: # cryptsetup -v -c aes-xts-plain64 -s 512 -h sha512 --use-random -y luksFormat --type luks1 /dev/sda1 # Open for formating cryptsetup open /dev/sda1 sda1_crypt mkfs.xfs /dev/mappesda1_crypt # If you are doing this from a standard debian system and you don't have debootstrap run the following: # apt install -y debootstrap coreutils # bootstrap core mount /dev/mappesda1_crypt /mnt debootstrap --arch amd64 bullseye /mnt http://ftp.us.debian.org/debian/ ## If you see: # E: Invalid Release file, no entry for main/binary-$ARCH/Packages # known good values are amd64 and i386 ## It means you provided an invalid Architecture name (like x86_64 or x86) # Chroot to get to work mount -t proc none /mnt/proc mount --bind /sys /mnt/sys mount --bind /dev /mnt/dev cp /etc/resolv.conf /mnt/etc/resolv.conf chroot /mnt/ 3. Basic setup ## Optionally you can add the following lines to /etc/apt/sources.list # deb http://ftp.us.debian.org/debian bullseye main # deb-src http://ftp.us.debian.org/debian bullseye main # deb http://ftp.debian.org/debian/ bullseye-updates main # deb-src http://ftp.debian.org/debian/ bullseye-updates main # deb http://security.debian.org/ bullseye/updates main # deb-src http://security.debian.org/ bullseye/updates main # *DO NOT FORGET TO SET ROOT PASSWORD!* passwd apt update apt install -y locales debconf # For rescue mode you need to run: # export TERM=vt100 dpkg-reconfigure locales # Restore old value: # export TERM=bterm apt install -y sudo vim mg apt purge -y nano select-editor # You need to set up your /etc/fstab: echo "/dev/mappesda1_crypt\t/\txfs\tdefaults\t0\t0" > /etc/fstab # Now to inform initramfs what to pass blkid | grep '/dev/sda1:' | echo "sda1_crypt\tUUID=$(awk -F'"' '{print $2}')\tnone\tluks" > /etc/crypttab # Make sure to install grub to /dev/sdb so that you don't mess up your desktop. 
grep -v rootfs /proc/mounts > /etc/mtab apt install -y grub-pc linux-base linux-image-amd64 cryptsetup ## If you see: # E: Sub-process /usbin/dpkg returned an error code (1) ## Don't worry about it, we are going to fix it later # Turn on grub's support for crypto echo 'GRUB_ENABLE_CRYPTODISK=y' >> /etc/default/grub grub-mkconfig -o /boot/grub/grub.cfg grub-install /dev/sda update-initramfs -u -k all ## If you see: # cryptsetup: WARNING: Invalid source device $UUID ## You forgot to prefix UUID= before your id in /etc/crypttab *You now can reboot and finsh the rest in the system* # Since we are manually setting everything up: export HOSTNAME=concernedgnu { cat <<-EOF 127.0.0.1 localhost 127.0.1.1 $HOSTNAME # The following lines are desirable for IPv6 capable hosts ::1 localhost ip6-localhost ip6-loopback ff02::1 ip6-allnodes ff02::2 ip6-allrouters EOF } >| /etc/hosts # Add our first user, set their password and add them to sudo useradd -m [User] passwd [User] usermod -G sudo -a [User] chsh [User] # Fix the broken packages apt install -f # Turn on network so we can add packages dhclient # Install posix standard tools apt update tasksel install standard # Add network-manger apt install -y network-manager nmtui # Remove need to type luks password twice dd bs=512 count=4 if=/dev/urandom of=/crypto_keyfile.bin chmod 400 /crypto_keyfile.bin cryptsetup luksAddKey /dev/sda1 /crypto_keyfile.bin # in /etc/crypttab replace none with /crypto_keyfile.bin blkid | grep '/dev/sda1:' | echo -e "sda1_crypt\tUUID=$(awk -F'"' '{print $2}')\t/crypto_keyfile.bin\tluks,keyscript=file" > /etc/crypttab # create /usshare/initramfs-tools/hooks/file (750 permissions) with the below content: :::::::::::::: START :::::::::::::: #!/bin/bash set -e PREREQ="cryptroot" prereqs() { echo "$PREREQ" } case $1 in prereqs) prereqs exit 0 ;; esac . /usshare/initramfs-tools/hook-functions # Hooks for loading keyctl software into the initramfs copy_exec /crypto_keyfile.bin exit 0 :::::::::::::: END :::::::::::::: chmod 750 /usshare/initramfs-tools/hooks/file # and then create it's match in /lib/cryptsetup/scripts/file (750 permissions) with the following content: :::::::::::::: START :::::::::::::: #!/bin/sh decrypt_file () { cat "$1" return 0 } if [ -z "$1" ]; then echo "$0: missing key as argument" >&2 exit 1 fi decrypt_file "$1" exit $? :::::::::::::: END :::::::::::::: chmod 750 /lib/cryptsetup/scripts/file update-initramfs -u -k all # You can verify that the keyfile and /lib/cryptsetup/scripts/file are both in the initrd with: lsinitramfs /boot/initrd.img-* | less *You may now logout and finish the rest as user* # Install Desktop utils if required sudo apt install -y xinit slim i3-wm dmenu x11-xserver-utils # If you skipped the guix option for space reasons: # sudo apt install -y gpg rxvt-unicode emacs git tig most firefox-esr 
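One extra check worth doing before rebooting into the new system is to dump the LUKS header and confirm the block layout looks the way you expect:
# confirm the LUKS1 header, cipher and key slots on the encrypted partition
cryptsetup luksDump /dev/sda1
# show filesystems and the opened mapper device
lsblk -f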
submitted by concernedgnu20190124 to linuxadmin [link] [comments]

another take on Getting into Devops as a Beginner

I really enjoyed m4nz's recent post: Getting into DevOps as a beginner is tricky - My 50 cents to help with it and wanted to do my own version of it, in hopes that it might help beginners as well. I agree with most of their advice and recommend folks check it out if you haven't yet, but I wanted to provide more of a simple list of things to learn and tools to use to complement their solid advice.

Background

While I went to college and got a degree, it wasn't in computer science. I simply developed an interest in Linux and Free & Open Source Software as a hobby. I set up a home server and home theater PC before smart TV's and Roku were really a thing simply because I thought it was cool and interesting and enjoyed the novelty of it.
Fast forward a few years and basically I was just tired of being poor lol. I had heard on the now defunct Linux Action Show podcast about linuxacademy.com and how people had had success with getting Linux jobs despite not having a degree by taking the courses there and acquiring certifications. I took a course, got the basic LPI Linux Essentials Certification, then got lucky by landing literally the first Linux job I applied for at a consulting firm as a junior sysadmin.
Without a CS degree, any real experience, and 1 measly certification, I figured I had to level up my skills as quickly as possible and this is where I really started to get into DevOps tools and methodologies. I now have 5 years experience in the IT world, most of it doing DevOps/SRE work.

Certifications

People have varying opinions on the relevance and worth of certifications. If you already have a CS degree or experience then they're probably not needed unless their structure and challenge would be a good motivation for you to learn more. Without experience or a CS degree, you'll probably need a few to break into the IT world unless you know someone or have something else to prove your skills, like a github profile with lots of open source contributions, or a non-profit you built a website for or something like that. Regardless of their efficacy at judging a candidate's ability to actually do DevOps/sysadmin work, they can absolutely help you get hired in my experience.
Right now, these are the certs I would recommend beginners pursue. You don't necessarily need all of them to get a job (I got started with just the first one on this list), and any real world experience you can get will be worth more than any number of certs imo (both in terms of knowledge gained and in increasing your prospects of getting hired), but this is a good starting place to help you plan out what certs you want to pursue. Some hiring managers and DevOps professionals don't care at all about certs, some folks will place way too much emphasis on them ... it all depends on the company and the person interviewing you. In my experience I feel that they absolutely helped me advance my career. If you feel you don't need them, that's cool too ... they're a lot of work so skip them if you can of course lol.

Tools and Experimentation

While certs can help you get hired, they won't make you a good DevOps Engineer or Site Reliability Engineer. The only way to get good, just like with anything else, is to practice. There are a lot of sub-areas in the DevOps world to specialize in ... though in my experience, especially at smaller companies, you'll be asked to do a little (or a lot) of all of them.
Though definitely not exhaustive, here's a list of tools you'll want to gain experience with both as points on a resume and as trusty tools in your tool belt you can call on to solve problems. While there is plenty of "resume driven development" in the DevOps world, these tools are solving real problems that people encounter and struggle with all the time, i.e., you're not just learning them because they are cool and flashy, but because not knowing and using them is a giant pain!
There are many, many other DevOps tools I left out that are worthwhile (I didn't even touch the tools in the kubernetes space like helm and spinnaker). Definitely don't stop at this list! A good DevOps engineer is always looking to add useful tools to their tool belt. This industry changes so quickly, it's hard to keep up. That's why it's important to also learn the "why" of each of these tools, so that you can determine which tool would best solve a particular problem. Nearly everything on this list could be swapped for another tool to accomplish the same goals. The ones I listed are simply the most common/popular and so are a good place to start for beginners.

Programming Languages

Any language you learn will be useful and make you a better sysadmin/DevOps Eng/SRE, but these are the 3 I would recommend that beginners target first.

Expanding your knowledge

As m4nz correctly pointed out in their post, while knowledge of and experience with popular DevOps tools is important; nothing beats in-depth knowledge of the underlying systems. The more you can learn about Linux, operating system design, distributed systems, git concepts, language design, networking (it's always DNS ;) the better. Yes, all the tools listed above are extremely useful and will help you do your job, but it helps to know why we use those tools in the first place. What problems are they solving? The solutions to many production problems have already been automated away for the most part: kubernetes will restart a failed service automatically, automated testing catches many common bugs, etc. ... but that means that sometimes the solution to the issue you're troubleshooting will be quite esoteric. Occam's razor still applies, and it's usually the simplest explanation that works; but sometimes the problem really is at the kernel level.
The biggest innovations in the IT world are generally ones of abstraction: config management abstracts away tedious server provisioning, cloud providers abstract away the data center, containers abstract away the OS level, container orchestration abstracts away the node and cluster level, etc. Understanding what is happening beneath each layer of abstraction is crucial. It gives you a "big picture" of how everything fits together and why things are the way they are, and it allows you to place new tools and information into that big picture, so you'll know why they'd be useful, or whether or not they'd work for your company and team, before you've even looked at them in depth.
Anyway, I hope that helps. I'll be happy to answer any beginner/getting-started questions that folks have! I don't care to argue about this or that point in my post, but if you have a better suggestion or additional advice then please just add it here in the comments or in your own post! A good DevOps Eng/SRE freely shares their knowledge so that we can all improve.
submitted by jamabake to devops [link] [comments]

Recover Stolen Bitcoin and Cryptocurrency

Cryptocurrencies are a high-priority target for cybercriminals. Whether they target your wallet directly or hack an exchange, once cybercriminals have access to your currency you need to act fast! You can also recover money lost to binary options.
Lost Bitcoin? Stolen Cryptocurrency? Hacked virtual currency account - Follow these steps now!
  1. Report to appropriate authorities - Report the case to the appropriate authorities so that it can be investigated.
  2. Change your login details - If you are still able to log in to your account, follow the normal procedure to reset your password and other security information, and enable two-factor authentication. This should lock the criminal out of the account.
  3. Notify the exchange/provider - If you purchased or are storing your currency with a service provider, let them know about the breach and the fraudulent transactions. They may retain information about the transaction that could prove useful in an investigation.
Will I Recover my Stolen Bitcoin?
Once your virtual currency has been stolen it is incredibly unlikely that you will be able to recover it. In theory, it’s possible to track your stolen bitcoin by monitoring the blockchain – in practice, however, this is made difficult by both the anonymous nature of the currency and the fact that the thief will most likely use a bitcoin exchange to trade the currency for normal cash straight away. However, money does leave a trail and you may be able to follow it to the identity of the criminal.
How to Recover Stolen Bitcoin and Cryptocurrency
  1. Check your devices for malware - It is worth considering that a malicious software infection may have led to the hacker accessing your currency. Scan the devices you use to handle your currency and make sure they are clean. You can follow our guide on checking for and removing malware here.
  2. Call your bank - If the transaction had related costs that hit your bank accounts - such as transaction fees or deposits - then contact your bank immediately and let them know it is an unauthorized/fraudulent transaction.
  3. Follow the money - You can follow the transactions of the wallet address that your funds were scammed into (there is a short sketch after this list showing how to query a block explorer programmatically). If you notice the scammer attempting to transfer funds from that wallet to cryptocurrency exchanges to sell for fiat currency, report it to the relevant exchanges immediately. Following the money trail through blockchain explorers gives you an opportunity to trace your lost funds and potentially identify the scammer. You can use browser-based blockchain exploring software such as https://blockexplorer.com to ‘follow’ the payment through to an end bitcoin address. Once you have this address you can check whether the owners of the end address(es) appear on http://bitcoinwhoswho.com/. In order to trade crypto for regular money on most popular exchanges, the thief would need to submit KYC (Know Your Customer) information, such as names, addresses, and ID information. Contacting the exchanges can potentially help you track down the scammer’s identity. This is another reason why it is important to file a police report as soon as the incident has taken place.
  4. Hire a Verified Recovery Expert - If you are willing to pay a decent amount for the return of your funds there are websites where you can post a bounty. Experienced blockchain searchers will investigate the theft and see if they can recover the funds for a price. Check out the list of verified recovery experts.
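For step 3 above, here is a minimal sketch of what "following the money" can look like programmatically. It assumes the public Esplora API at blockstream.info (any block explorer with an address-lookup endpoint works in a similar way) and the third-party requests library; the address shown is a placeholder you would replace with the wallet you are tracing.

import requests

# Public block explorer API (assumption: Blockstream's Esplora instance)
API = "https://blockstream.info/api"

def recent_txids(address, limit=10):
    """Return the most recent transaction IDs that touch the given address."""
    resp = requests.get(f"{API}/address/{address}/txs", timeout=10)
    resp.raise_for_status()
    return [tx["txid"] for tx in resp.json()[:limit]]

if __name__ == "__main__":
    suspect_address = "bc1q..."  # placeholder - use the address your funds were sent to
    for txid in recent_txids(suspect_address):
        print(txid)

Each transaction ID can then be inspected in a browser-based explorer to see where the funds moved next.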
How to Avoid your Cryptocurrency Being Stolen in Future
  • Don’t talk publicly about owning virtual currency - If it is easy to work out that you own a cryptocurrency from your social media activity then you are much more likely to be a target.
  • Use multi-factor authentication - Ensure that you have multi-factor authentication enabled. Use an authenticator app rather than the SMS option (there is a short sketch after this list showing how those one-time codes work). If there is an option to disable SMS authentication entirely, do so.
  • Use a new email address and complex password to set up the account - A new, clean email address that you will only use for the virtual currency account is best. This reduces the chance of you being targeted via your email account.
  • Use a ‘cold wallet’ - Keep your cryptocurrency off the internet, in a "cold wallet." A "cold wallet" is not a brand; it is the concept of storing bitcoins offline (not connected to the internet), which reduces the opportunities for hackers to steal them via online techniques.
  • Spread your investments across exchanges - A number of exchanges have been breached. Spread your investments across exchanges to minimize the impact.
  • Get secure - Take time to improve your general online security. Use sites like Get Safe Online and Cyber Aware to understand what good security looks like and make changes. I was personally able to recover my lost bitcoin with the help of Express Recovery Pro – [email protected]
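To make the authenticator-app point above a bit more concrete, here is a small sketch using the third-party pyotp library showing what a time-based one-time password (TOTP) is: a code derived from a shared secret and the current time, so it never has to travel over SMS. This is purely illustrative; your exchange's authenticator app handles all of this for you.

import pyotp

# The shared secret is what the enrolment QR code encodes
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Shared secret:", secret)
print("Current one-time code:", totp.now())

# The service later verifies a submitted code against the same secret
print("Code verifies:", totp.verify(totp.now()))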
submitted by Babyelijah to u/Babyelijah [link] [comments]

binary options signals providers - best binary options signals providers
binary options trading signals franco review
Binary Options Strategy 2020 100% WIN GUARANTEED ...
iq option signal : FREE SIGNAL provider for binary options ...
Binary Option Accurate Signals Software //Boss Pro Bot V21 ...
NEW 100% WIN BINARY OPTION FREE TRADING SIGNALS - YouTube
Binary option signals - best binary options signals 2017 - best automated trading software 2017

4 Best Binary Options Signals Providers. BinaryOptionRobot. BinaryOptionRobot is by far the best binary signal provider. Read the review. VirtNext. Winning Rate: 86%+ Details. This signal provider is taking the binary options signal mini-industry by storm. VirtNext have proven time without number with diverse users that high-earning signals are ...
The answer is both the above beliefs are accurate. Free trading signals can be both profitable and ineffective as well. Sometimes, signal providers provide free binary options signals on a test/trial basis. This can be provided in the form of free binary options signals software or just simply as signals.
Binary option software provider. Binary Option Software Provider. So, they end up losing a lot of money on fake service providers, Software, Auto Traders and even shady brokers who will do anything to separate YOU from your CASH. The aim of this software is to automatize the trading of professional traders. Recommended Binary Options Signals Providers: Signal Hive gets BinaryOptions.net's vote ...
Binary options demo accounts are the best way to try both binary options trading, and specific brokers' software and platforms – without needing to risk any money. You can get demo accounts at more than one broker, try them out and only deposit real money at the one you find best. It can also be useful to have accounts at more than one ...
Binary option signal providers give you the opportunity to let an experienced trader's algorithm or judgment influence your trading decisions. The signal providers and its software do virtually ...
The Binary Options Software is available for any device. Nowadays it is especially important for the private trader to have a flexible trading platform. This means that the platform should also be usable from the road. With the IQ Option Software, you can access your portfolio at any time, 24/7. Download the app for your mobile device. The advantage is that you only need one access to ...


binary options signals providers - best binary options signals providers

Binary Option Trading Signals Review: Binary Options Trading Signals is, by far, one of the best signals providers available on the market. The fact that you will struggle to find a single ...
I wish you happy trading with the help of this software. Please Subscribe to get more free good trading robot FREE : http://bit.ly/2DBZhzv You can start with ...
Best binary options signals 2017 - best automated trading software 2017. Best binary options signals providers 2017. Binary options trading signals: binary strategy - trading options (binary ...
Unlike expert option managed expertoption account trading services binary option signals - best binary options signals 2017 - best automated expert option trading software 2017 where the provider ...
Do not miss! DEMO ACCOUNT: https://bit.ly/2Lq3NUt You can use this strategy in binary options to win every time but you have to keep following things in mind...
More Strategies & Signals, please visit my twitter !!! iq option signal : FREE SIGNAL provider for binary options trading binary options 2017, iq option, iq ...
The road to success through trading IQ option. Best Bot Reviews Iq Option 2020. We make videos using this software bot which aims to make it easier for you t...

https://binary-optiontrade.quimesarkslumexlot.tk