28 July 2019
Cloud Providers
Most Azure cloud service offerings are essentially drop-in replacements for Microsoft's own standalone software tools. For Microsoft, Azure looks like an alternative route to vendor lock-in: a re-purposed cloud option that has so far proven useful through heavy, gimmicky marketing. GCP, on the other hand, provides many alternatives for big data, but with less competitive pricing, weaker business-critical reliability, security constraints, plenty of opportunities to re-invent the wheel under vendor lock-in, still-limited SQL use cases, and a narrower overall service catalogue. AWS has proven to have an effective pricing model and caters to a wide range of business needs, with strong reliability and flexible options for managing services. For most organisations, especially for data science work, AWS is the go-to cloud solution; Azure and GCP still lag considerably in reliability, breadth of cloud services, and pricing, with vendor lock-in the biggest concern.
In many cases, cloud providers are constrained by their mission statements: what they are trying to achieve through their solutions for businesses, and their goals for future infrastructure development. For Microsoft, Windows is the ultimate success story, one that evolved in parallel with Apple. But Linux has become the de facto operating system for the cloud, for obvious reasons. Data as a commodity is a valuable asset to most organisations, and managing risk in security and compliance is an enduring struggle for many of them. Especially in meeting GDPR compliance, many organisations will want transparent data lineage. Can one trust the storage and processing of data on GCP? All Google services converge to some degree or another and get indexed by their search engine.
Invariably, the cost and risk of third-party cloud infrastructure versus in-house infrastructure will always be something for companies to weigh. In the long run, it seems organisations will take back control of their own data storage and processing needs. The trend is towards portable, smarter, and stackable private cloud ownership, more flexibility in infrastructure management, and virtualization at an affordable cost. Start-ups may find it easier to reduce setup costs by leveraging third-party infrastructure; but as companies grow the market value of their products, they may increase their independence by eventually moving away from third-party cloud dependency to their own in-house converged infrastructure, allowing greater flexibility to meet consumer expectations and the demands of their product services - enterprise enablement drives creative and profitable growth.
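The cloud-versus-in-house trade-off above can be weighed with a simple break-even calculation. The figures below are purely hypothetical placeholders for illustration, not vendor quotes:

```python
# Hypothetical break-even sketch: monthly third-party cloud spend vs
# amortised in-house infrastructure. All numbers are illustrative only.

def breakeven_months(cloud_monthly, inhouse_upfront, inhouse_monthly):
    """Months after which cumulative in-house cost drops below cloud cost."""
    if cloud_monthly <= inhouse_monthly:
        return None  # in-house never catches up on running costs alone
    return inhouse_upfront / (cloud_monthly - inhouse_monthly)

# Illustrative figures: a $40k/month cloud bill vs a $500k capital outlay
# plus $15k/month to run the same workload on owned hardware.
months = breakeven_months(40_000, 500_000, 15_000)
print(f"break-even after {months:.0f} months")
```

Under these made-up numbers the in-house option pays for itself in under two years, which is why the calculus tends to shift as a company's workload grows and stabilises.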
Labels: artificial intelligence, big data, Cloud, data science, devops, distributed systems, microservices
25 July 2019
Deep Learning Datasets
Deep Learning Datasets 2
Skymind Datasets
Dataset List
List of Wikipedia Datasets
Tensorflow Datasets
Google Datasearch (depends on how up-to-date their indexed dataset results are)
24 July 2019
Everyday Robots
Robots, over the years, have proven themselves worthy candidates for taking over mundane, labour-intensive manual work from humans, both commercially and at home. Not only can robots work more effectively, they are also extremely productive. In general, robots can be applied to most specialist labour, so they can be trained to be good at a particular aspect of work. But they may not yet be capable of doing multiple things through adaptability and multi-class transfer learning. The following list highlights some example robot use cases.
- automotive breakdown repair man/woman
- home and office cleaner
- rubbish disposal
- grocery shopper
- home and office security officer/inspector
- laundry service
- cook (chef)
- critic / reviewer
- gardener
- table setter
- mechanical turk
- post man/woman
- babysitter
- mystery shopper
- chauffeur
- home and office mover
- handy man/woman
- telephone/broadband installer/repair man/woman
- call centre agent
- lollypop man/woman
- school teacher
- office secretary
- family mediator
- office mediator
- crop duster
- nursing home nurse
- doctor and nurse
- nanny
- lawyer
- accountant
- assembly line worker
- dentist
- data entry clerk
- journalist
- financial analyst
- comedian
- musician
- artist
- telemarketer
- paramedic
- commercial and defence pilots
- public transport worker
- rail repair
- air traffic controller
- land traffic controller
- sea traffic controller
- metrologist
- kitchen porter
- crop pickers
- police man/woman
- fire man/woman
- immigration/border controller
- politician
- director
- photographer
- creative writer
- curator
- cheerleader
- gamer
- construction worker
- programmer
- logging worker
- fisher man/woman
- steel worker
- street sweeper
- refuse collector
- carpenter
- stunt man/woman
- courier
- wrestler
- boxer
- sports man/woman
- recycle waste worker
- power worker
- farmer
- roofer
- astronaut
- army & military officer
- bodyguard
- slaughterhouse worker
- mechanic
- metalcrafter
- search & rescue
- special forces (SAS, Delta Force, Seal, etc)
- sanitation worker
- land mine remover
- miner
- bush pilot
- lumberjack
- librarian
- human resources assistant
- salesman
- editor
- dance instructor
- bus conductor
- tourist guide
- stewardess
- cashier
- store replenisher
- data center operator
- taxi cab driver
- train driver
- lorry driver
- customer service advisor
- electrician
- vehicle washer
- bed maker
- bathroom cleaner
- pet walker
- oilfield driver
- derrick hand
- roustabout
- offshore diver
- rodent killer
- insect killer
- therapist
- architect
- actor
- backup singer
- backup dancer
- house builder
- waiter
- presenter
- manager
- hacker
- stripper (exotic dancer)
- sex worker
- hairdresser
- makeup artist
- fashion designer
- cameraman
- researcher
- chemist
- pharmacist
- landscapist
- baker
- ship builder
- car maker
- broadcast technician
- hotel helpdesk
- store helpdesk
- mall helpdesk
- site assistant
- tailor
- tutor
- pet trainer
- cartoonist
- reporter
- moderator
- painter
- plumber
- auditor
- financial trader
- financial broker
- financial advisor
- compliance advisor
- fraud advisor
- risk advisor
- surveillance agent
- social media agent
- bricklayer
- choreographer
- actuarian
- physiotherapist
- tea/coffee maker
- pizza maker
- burger maker
- welder
- surveyor
- surgeon
- glazier
- tiler
- stonemason
- optician
- tool maker
- artisan
- sonographer
- radio technician
- sports coach
- bartender / barmaid
- bellboy
- paperboy
- drain inspector
- pet feeder
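The transfer-learning idea from the introduction, reusing features learned on one task to bootstrap another, can be sketched minimally. The tiny nearest-centroid "head" and toy features below are illustrative stand-ins, not a production robotics model:

```python
# Minimal transfer-learning sketch: a frozen "feature extractor" learned on
# task A is reused for task B, where only a small head is (re)fit.
# Everything here is a toy stand-in for a real pretrained network.

def extract_features(x):
    """Frozen extractor: maps a raw 2-vector to simple derived features."""
    return (x[0] + x[1], x[0] * x[1])

def fit_centroid_head(samples, labels):
    """Fit a nearest-centroid classifier head on top of frozen features."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        f = extract_features(x)
        s = sums.setdefault(y, [0.0, 0.0])
        s[0] += f[0]; s[1] += f[1]
        counts[y] = counts.get(y, 0) + 1
    return {y: (s[0] / counts[y], s[1] / counts[y]) for y, s in sums.items()}

def predict(centroids, x):
    """Classify by nearest centroid in the frozen feature space."""
    f = extract_features(x)
    return min(centroids,
               key=lambda y: (f[0] - centroids[y][0]) ** 2 +
                             (f[1] - centroids[y][1]) ** 2)

# A new task reuses the same extractor; only the head is refit on new labels.
head = fit_centroid_head([(0, 0), (0, 1), (5, 5), (6, 5)],
                         ["idle", "idle", "moving", "moving"])
print(predict(head, (5, 6)))  # prints "moving"
```

The point of the sketch is the split: the expensive part (the extractor) is shared across tasks, while each new task only refits a cheap head, which is roughly what would let a robot pick up a new speciality without retraining from scratch.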
13 July 2019
Lucid Pipeline
Most AI solutions can be built as pipelined implementations from a set of generalizable models, with various sources and sinks. Invariably, a knowledge graph will act as a key layer for evolvable feature engineering that can be translated into ontological vectors and fed into AI models. Split the pipeline into a lucid funnel, lucid reactor, lucid ventshaft, and lucid refinery, using a loose analogy with a distillation process. The following components highlight the key abstractions:
AI/DS Engine Layers:
- Disc (frontends - discovery/visualization layer)
- Docs (live specs via swagger, etc - documentation layer)
- APIs (proxy/gateway services connected with elasticsearch or solr - application layer)
- DS (models and semantics - AI layer)
- Eval (benchmarks, workbench and metrics - evaluation layer)
- Human (optional human in the loop - human/annotation layer)
- Tests (load, unit, uat, service, etc - testing layer)
- Admin (control for access management, operations workloads, and automation - administration layer)
- Funnel (ingestion, pre-process, post-process layer using brokers like Kafka/Kinesis)
- Reactor (reactive processes - workflow/transformational layer - via Spark, Beam, Flink, Dask, etc)
- Ventshaft (fuzzy matches, distance matches, probabilistic filters, relational matches, clusters, fake filters, fake matches, feature selection filters, component factors, informed searches, uninformed searches, string matches, projection filters, samplings, tree searches, validations, verifications - functional/utility layer)
- Refinery (context types, objects, attributes and methods as blueprints - entity/object layer)
- Datapiles (indexed data sources as services for document/column/graph stores - data access layer)
- Conf (environment configurations for nginx, etc - configuration layer)
- Cloud (connected services for AWS/GCP orchestration - infrastructure/platform layer)
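A minimal sketch of how the funnel/reactor/ventshaft/refinery split above could be composed as chained stages. The stage names follow the post, but the generator interfaces and toy transforms are assumptions, not a reference implementation:

```python
# Toy composition of the pipeline layers described above. Each stage is a
# generator over a stream of records; a real system would back these with
# Kafka/Kinesis, Spark, Beam, etc. rather than plain Python generators.

def funnel(raw_records):
    """Ingestion/pre-process layer: normalise raw input."""
    for r in raw_records:
        yield r.strip().lower()

def reactor(records):
    """Workflow/transformation layer: split records into candidate tokens."""
    for r in records:
        yield r.split()

def ventshaft(token_lists, min_len=3):
    """Functional/utility layer: filter tokens (a stand-in for the fuzzy
    matches, probabilistic filters, feature selection, etc. listed above)."""
    for tokens in token_lists:
        yield [t for t in tokens if len(t) >= min_len]

def refinery(token_lists):
    """Entity/object layer: lift filtered tokens into typed blueprints."""
    for tokens in token_lists:
        yield {"entity_tokens": tokens, "n": len(tokens)}

def pipeline(raw_records):
    """Source-to-sink composition: Funnel -> Reactor -> Ventshaft -> Refinery."""
    return list(refinery(ventshaft(reactor(funnel(raw_records)))))

print(pipeline(["  The Knowledge Graph  ", "AI in DS "]))
```

Because each stage only consumes and yields records, stages can be swapped or rehosted (e.g. the reactor onto Spark or Flink) without touching their neighbours, which is the main appeal of the layered split.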