com.salmanff.vulog

The developer has not defined a format for this record.

author: Sal Khan
_owner: null
description: Sal Khan, the founder and CEO of Khan Academy, thinks artificial intelligence could spark the greatest positive transformation education has ever seen. He shares the opportunities he sees for students and educators to collaborate with AI tools -- including the potential of a personal AI tutor for every student and an AI teaching assistant for every teacher -- and demos some exciting new features for their educational chatbot, Khanmigo.
domainApp: ted talks
image: null
keywords: ["ted","talks","technology","education","ai","teaching","kids"]
path: null
purl: https://www.ted.com/talks/sal_khan_the_amazing_ai_super_tutor_for_students_and_teachers/c
title: Sal Khan: The amazing AI super tutor for students and teachers
type: null
url: https://www.ted.com/talks/sal_khan_the_amazing_ai_super_tutor_for_students_and_teachers/c
vSearchWords: ["https www ted com talks sal khan the amazing ai super tutor for students and teachers c","ted","talks","sal","khan","the","amazing","ai","super","tutor","for","students","and","teachers","founder","ceo","of","academy","thinks","artificial","intelligence","could","spark","greatest","positive","transformation","education","has","ever","seen","he","shares","opportunities","sees","educators","to","collaborate","with","tools","including","potential","a","personal","every","student","an","teaching","assistant","teacher","demos","some","exciting","new","features","their","educational","chatbot","khanmigo","technology","kids"]
date: null
vNote: This is the future of education... and work too, maybe.
vulog_favIconUrl: null
vHighlights: []
vComments: null
_id: @salman/com.salmanff.vulog.marks/645b7d7e6658df0014761b9c
_app_table: com.salmanff.vulog.marks
__date_published: 5/10/2023
_original_id: "645b7d7e6658df0014761b9c"
_fields: {"fj_modified_locally":{"type":"string","description":" "},"url":{"type":"string","description":" "},"domainApp":{"type":"string","description":" "},"type":{"type":"string","description":" "},"author":{"type":"string","description":" "},"description":{"type":"string","description":" "},"keywords":{"type":"string","description":" "},"other":{"type":"string","description":" "},"vNote":{"type":"string","description":" "},"vStars":{"type":"string","description":" "},"vHighlights":{"type":"object","description":" "},"vComments":{"type":"object","description":" "},"vSource":{"type":"string","description":""},"vulog_msg_type":{"type":"string","description":" "},"vCreated":{"type":"string","description":" "},"vulog_ttl_time":{"type":"string","description":" "},"vulog_max_scroll":{"type":"string","description":" "},"vuLog_height":{"type":"string","description":" "},"temp_unknown_tags":{"type":"string","description":" "},"tabid":{"type":"string","description":" "},"fj_deleted":{"type":"string","description":" "},"purl":{"type":"string","description":" "},"title":{"type":"string","description":" "}}
Published by: salman on 5/10/2023
com.salmanff.vulog

The developer has not defined a format for this record.

author: null
_owner: null
description: The wave of enthusiasm around generative networks feels like another Imagenet moment - a step change in what ‘AI’ can do that could generalise far beyond the cool demos. What can it create, and where are the humans in the loop?
domainApp: benedict evans
image: http://static1.squarespace.com/static/50363cf324ac8e905e7df861/t/639997a380800648445acf3f/1671010211384/B4AAC213-3764-4A49-A1F1-CA18B3368CF0.jpeg?format=1500w
keywords: []
path: null
purl: https://www.ben-evans.com/benedictevans/2022/12/14/ChatGPT-imagenet
title: ChatGPT and the Imagenet moment — Benedict Evans
type: article
url: https://www.ben-evans.com/benedictevans/2022/12/14/ChatGPT-imagenet
date: null
vComments: null
vulog_favIconUrl: null
vHighlights: [{"id":"1671658125416-546","vCreated":1671658125416,"focusNode":[{"nodeId":"block-764303a9f7f81c1c59ae"},{"index":0,"type":"div"},{"index":3,"type":"p"},{"index":0,"type":"text"}],"focusOffset":861,"anchorNode":[{"nodeId":"block-764303a9f7f81c1c59ae"},{"index":0,"type":"div"},{"index":3,"type":"p"},{"index":0,"type":"text"}],"string":"Instead of people trying to write rules for the machine to apply to data, we give the data and the answers to the machine and it calculates the rules. This works tremendously well, and generalises far beyond images, but comes with the inherent limitation that such systems have no structural understanding of the question - they don’t necessarily have any concept of eyes or legs, let alone ‘cats’. ","color":"green","tester":"","container":[{"nodeId":"block-764303a9f7f81c1c59ae"},{"index":0,"type":"div"},{"index":3,"type":"p"}],"displayErr":false,"anchorOffset":462,"vNote":""},{"id":"1671658242756-848","vCreated":1671658242756,"focusNode":[{"nodeId":"block-yui_3_17_2_1_1671005354407_85567"},{"index":0,"type":"div"},{"index":1,"type":"p"},{"index":6,"type":"text"}],"focusOffset":720,"anchorNode":[{"nodeId":"block-yui_3_17_2_1_1671005354407_85567"},{"index":0,"type":"div"},{"index":1,"type":"p"},{"index":6,"type":"text"}],"string":"If I ask for ‘the chest burster scheme in Alien as directed by Wes Anderson’ and get a 92% accurate output, no-one will complain that Sigourney Weaver had a different hair style. But if I ask for some JavaScript, or a contract, I might get a ‘98% accurate’ result that looks a lot like the JavaScript I asked for, but the 2% error might break the whole thing. To put this another way, some kinds of request don’t really have wrong answers, some can be roughly right, and some can only be precisely right or wrong, and cannot be ‘98% correct’.","color":"green","tester":"","container":[{"nodeId":"block-yui_3_17_2_1_1671005354407_85567"},{"index":0,"type":"div"},{"index":1,"type":"p"}],"displayErr":false,"anchorOffset":178,"vNote":""},{"id":"1671658608529-721","vCreated":1671658608529,"focusNode":[{"nodeId":"block-yui_3_17_2_1_1671005354407_85567"},{"index":0,"type":"div"},{"index":10,"type":"p"},{"index":0,"type":"text"}],"focusOffset":788,"anchorNode":[{"nodeId":"block-yui_3_17_2_1_1671005354407_85567"},{"index":0,"type":"div"},{"index":10,"type":"p"},{"index":0,"type":"text"}],"string":"Yahoo tried paying people to catalogue the entire web one site at a time, and that was unscalable. Google, on one side, is based on the patterns of aggregate human behaviour of the web, and on the other side it gives you ten results and makes you pick one - manual curation by billions of users. The index is made by machine, but the corpus it indexes is made by people and the results are chosen by people. 
In much the same way, generative networks, so far, rely on one side on patterns in things that people already created, and on the other on people having new ideas to type into the prompt and picking the ones that are good","color":"green","tester":"","container":[{"nodeId":"block-yui_3_17_2_1_1671005354407_85567"},{"index":0,"type":"div"},{"index":10,"type":"p"}],"displayErr":false,"anchorOffset":159,"vNote":""},{"id":"1671658999339-967","vCreated":1671658999340,"focusNode":[{"nodeId":"block-yui_3_17_2_1_1671005354407_85567"},{"index":0,"type":"div"},{"index":11,"type":"p"},{"index":0,"type":"text"}],"focusOffset":575,"anchorNode":[{"nodeId":"block-yui_3_17_2_1_1671005354407_85567"},{"index":0,"type":"div"},{"index":11,"type":"p"},{"index":0,"type":"text"}],"string":"But the other side of this is that ML gives you not infinite interns but one intern with super-human speed and memory - one intern who can listen to a billion calls and say ‘you know, after 300m calls, I noticed a pattern you didn’t know about…’","vComments":[{"creator":"salman@https://www.salmanff.com","vCreated":1671659590832,"text":"Actually what ML gives you is not only depth (ie the ability to “listen to - billion calls”) but also breadth: the ability to read a thousand books on psychology and persuasion tactics and other unrelated subjects and tie them into its conclusions. What is human ingenuity other than adding patterns from two seemingly unrelated subjects together? Perhaps ML will be brilliant - or perhaps our human ingenuity is just much simpler than assumed ."}],"color":"green","tester":"","container":[{"nodeId":"block-yui_3_17_2_1_1671005354407_85567"},{"index":0,"type":"div"},{"index":11,"type":"p"}],"anchorOffset":330,"displayErr":false,"vNote":""}]
_id: @salman/com.salmanff.vulog.marks/63a1e9d079672b00161a3f57
_app_table: com.salmanff.vulog.marks
__date_published: 12/21/2022
_original_id: "63a1e9d079672b00161a3f57"
_fields: {"fj_modified_locally":{"type":"string","description":" "},"url":{"type":"string","description":" "},"domainApp":{"type":"string","description":" "},"type":{"type":"string","description":" "},"author":{"type":"string","description":" "},"description":{"type":"string","description":" "},"keywords":{"type":"string","description":" "},"other":{"type":"string","description":" "},"vNote":{"type":"string","description":" "},"vStars":{"type":"string","description":" "},"vHighlights":{"type":"object","description":" "},"vComments":{"type":"object","description":" "},"vSource":{"type":"string","description":""},"vulog_msg_type":{"type":"string","description":" "},"vCreated":{"type":"string","description":" "},"vulog_ttl_time":{"type":"string","description":" "},"vulog_max_scroll":{"type":"string","description":" "},"vuLog_height":{"type":"string","description":" "},"temp_unknown_tags":{"type":"string","description":" "},"tabid":{"type":"string","description":" "},"fj_deleted":{"type":"string","description":" "},"purl":{"type":"string","description":" "},"title":{"type":"string","description":" "}}
Published by: salman on 12/21/2022
>>
Posted by salman on 12/8/2022

Me: Write a blog post comparing chatGPT to ETFs

ChatGPT and ETFs are two very different things, so it's difficult to compare them directly. However, both are interesting in their own ways, and understanding the differences between them can be helpful in understanding how each can be used.

[.. the rest of ChatGPT's answer read like a 7th grader's essay... so I thought to explain further... ]


Me: chatGPT is based on all the systemic knowledge it can find, but probably cannot yet create new or idiosyncratic knowledge. Human ingenuity and talent can then use that as a base and focus on improving on the margins, purely on the idiosyncrasies of new knowledge. ETFs are also based on the systemic knowledge of the market as a whole - if you want to invest, buying a market ETF gives you average market performance. But human ingenuity can be used to make bets on idiosyncrasies - particularly mispriced securities that will outperform the market. Does that make sense?

Yes, that makes sense. ChatGPT is a large language model trained by OpenAI, which means that it has been trained on a huge amount of data and can generate text that sounds like it was written by a human. However, it is not capable of creating new knowledge or making unique insights. It can only generate text based on the knowledge it has been trained on.

Similarly, ETFs are based on the systemic knowledge of the market as a whole. Buying a market ETF will give you exposure to the overall market, but it won't allow you to make unique bets or take advantage of mispriced securities. It will simply track the performance of the market index or basket of assets that the ETF is based on.

However, both ChatGPT and ETFs can be used as a starting point for human ingenuity and talent. In the case of ChatGPT, a person can use the generated text as a starting point and then focus on improving it and adding their own unique insights. With ETFs, a person can use the market knowledge provided by the ETF as a starting point and then focus on identifying mispriced securities and making bets on them in order to potentially outperform the market. In both cases, human ingenuity and talent can be used to build on the systemic knowledge provided by ChatGPT or ETFs in order to create something new and unique.



...
labels:
com.salmanff.vulog

The developer has not defined a format for this record.

author: null
_owner: null
description: What Elon Musk got wrong about Twitter, journalists and VCs got wrong about FTX, and Peter Thiel got wrong about crypto and AI — and why I made many of the same mistakes along the way.
domainApp: stratechery by ben thompson
image: https://i0.wp.com/stratechery.com/wp-content/uploads/2022/11/narratives-1.png?fit=1200%2C599&ssl=1
keywords: []
path: null
purl: https://stratechery.com/2022/narratives
title: Narratives – Stratechery by Ben Thompson
type: article
url: https://stratechery.com/2022/narratives/
date: null
vComments: null
vulog_favIconUrl: null
vHighlights: []
_id: @salman/com.salmanff.vulog.marks/6372cd3e31427a00161ad97c
_app_table: com.salmanff.vulog.marks
__date_published: 11/19/2022
_original_id: "6372cd3e31427a00161ad97c"
_fields: {"fj_modified_locally":{"type":"string","description":" "},"url":{"type":"string","description":" "},"domainApp":{"type":"string","description":" "},"type":{"type":"string","description":" "},"author":{"type":"string","description":" "},"description":{"type":"string","description":" "},"keywords":{"type":"string","description":" "},"other":{"type":"string","description":" "},"vNote":{"type":"string","description":" "},"vStars":{"type":"string","description":" "},"vHighlights":{"type":"object","description":" "},"vComments":{"type":"object","description":" "},"vSource":{"type":"string","description":""},"vulog_msg_type":{"type":"string","description":" "},"vCreated":{"type":"string","description":" "},"vulog_ttl_time":{"type":"string","description":" "},"vulog_max_scroll":{"type":"string","description":" "},"vuLog_height":{"type":"string","description":" "},"temp_unknown_tags":{"type":"string","description":" "},"tabid":{"type":"string","description":" "},"fj_deleted":{"type":"string","description":" "},"purl":{"type":"string","description":" "},"title":{"type":"string","description":" "}}
Published by: salman on 11/19/2022
>>
Posted by salman on 3/3/2022

In his widely-read post on web3, Moxie Marlinspike reiterates over and over again that “If there’s one thing I hope we’ve learned about the world, it’s that people do not want to run their own servers.” The image he seems to have in mind is of a big old-style PC with a CD drive sitting under the messy desk of a nerd, who spends his(!) time making sure it is running smoothly. So of course, seen in that way, as Moxie notes, “Even nerds do not want to run their own servers at this point.” But then he goes on to say that “Even organizations building software full time do not want to run their own servers at this point.” And therein lies his logical flaw. All these “organizations building software” do have their own servers – except that the servers are running in the cloud – some are even called ‘serverless’. Of course individuals don’t want to maintain physical personal servers sitting under their desks, but, much like those software organizations, we may all very well want to have our own personal servers in the cloud, if these were easy enough to install and maintain, and if we had a rich ecosystem of apps running on them.

This has been an ideal since the beginnings of web1, when, as Moxie himself says, we believed “that everyone on the internet would be both a publisher and consumer of content as well as a publisher and consumer of infrastructure” – effectively implying that we each have our own personal servers and control our data environment. Moxie says it is too “simplistic” to think that such an ideal is still attainable. Many web3 enthusiasts believe web3 can provide the answer. My view is that (regardless of the number we put in front of it), at some point, technology and technology mores will have advanced far enough to make such a personal server ecosystem feasible. We may be close to that point today.

But before laying out my reasoning, let me present two other minor critiques of Moxie’s excellent article.  

First is the way in which Moxie criticizes what he calls “protocols” as being too slow compared to “platforms”. Although he may be right in the specific examples he notes – i.e. that private-company-led initiatives from the likes of Slack and WhatsApp have been able to move so much faster than standard open ‘protocols’ such as IRC – he makes this argument in a general context of web2 vs web3, and thus seems to imply that ALL open community-led projects will fail because private-led initiatives will inevitably innovate faster. But how could such a statement be reconciled with something like Linux, the quintessential open source project which is the most used operating system to access the web and to run web servers? How can one not think of html and the web itself, and javascript, each of which is an open platform, or a simple agreed-upon convention – upon which so much innovation has been created over the past decades? In defense of Moxie’s point, if you talk to anyone involved in the development of these over the years, chances are that they will complain about how slow-moving their respective technical committees can be. But perhaps that is how fundamental tech building blocks should be – it is precisely the slow-moving (and arguably even technologically under-innovative) nature of platforms and protocols that provides the stability needed for fast-moving innovators to build on them. The critical societal question isn’t whether a particular protocol or web3 initiative will be innovative in and of itself, but whether any one or multiple such initiatives will serve as a foundation upon which multiple fast-moving innovations can be built, preferably using an architecture which supports a healthy ecosystem. The base elements (or ‘protocols’) don’t necessarily need to be fast-moving themselves – they just need to have the right architecture to induce innovations on top of them.

In this light, as Azeem Azhar has noted, a couple of the more interesting web3 initiatives are those that are trying to use crypto-currency-based compensation schemes to create a market mechanism for services, thus tackling problems that web2 companies had previously failed to solve. One example is Helium, which is a network of wireless hotspots, and another is Ethereum Swarm, which is creating a distributed personal storage system. Both of these ideas had been tried a decade or two ago but never gained the expected popularity, and they are now being reborn with a web3 foundation and incentive system. Indeed, as technology tends to do, it may have advanced far enough today to make them successful.

My last critique of Moxie’s article is that he contends that any web2 interface to web3 infrastructure will inevitably lead to immense power concentration for the web2 service provider, due to the winner-takes-all nature of web2. I would contend that it does not need to be that way, and we can point to the cloud infrastructure services market as evidence. This may seem like a counter-intuitive example, given the dominance of big-tech, and especially Amazon’s AWS, in the cloud infrastructure market, but the dynamics of this market are vastly different from the b2c markets that are dominated by the same big-tech companies (Google, Amazon, and Microsoft). Despite every effort by these big-tech usual suspects to try and provide proprietary add-ons to their cloud services so as to lock in their customers, they are ultimately offering services on a core open-source tech stack. This means that they are competing on a relatively level playing field to offer their services, knowing that each of the thousands of businesses that have thrived on their infrastructure can get up and leave to a competing cloud provider. The customers are not locked in by the network effects that are typically seen in b2c offerings. That is clear from the rich ecosystem of companies that have thrived on these platforms. Furthermore, not only can competitors in the cloud infrastructure market take on various niche portions of this giant market, but new entrants like Cloudflare and Scaleway can also contemplate competing head-on. This competition, which is enabled by the existence of a core open-source tech stack, keeps even the most dominant service providers honest(!) as their customers continue to be king. There is no better evidence for that than the vibrancy of the ecosystems built on top of these services – in stark contrast to the consumer world, where the lack of interoperability and the strength of the lock-ins provide immense barriers to entry. Given a similar architecture, there is no reason these same dynamics can’t be transposed to the personal server space and the b2c market.

Yet, by going in with the assumption that such a thing is impossible, Moxie misses the opportunity to think through what new architectural solutions are possible by combining web3 elements with our existing technological interactions, and whether such new architectures could enable strong enough competition and portability to curb the winner-takes-all dynamics of the current b2c consumer web services market.

This brings us back to my pet peeve of personal servers – a concept that has been around for more than a decade, and that the tech world has come to believe will never work. The question is: have recent developments in tech fundamentally shifted the landscape to make the original ideal of web1 viable again? My view is that the stars may finally be lining up, and that such an architecture is indeed possible.

✨ Web 3

A first star may be the launch of Ethereum Swarm, “a system of peer-to-peer networked nodes that create a decentralised storage”. As mentioned, Swarm uses Ethereum smart contracts as the basis of an incentive system for node participants to store data. It is quintessentially web3. Yet, it acts as a core infrastructure layer, on which anything can be built. So, the Fair Data Society, a related organization, built fairdrive, a web2-based gateway to access this storage infrastructure – a key building block for allowing other web2-based applications on top of it. Moxie’s blog post would argue that any such web2 interface would reconcentrate power in the hands of the same web2 service provider. But that really depends on what is built within and on top of that gateway – the architecture that lies on top of the foundational storage. If the data stored is in an easily readable format, based on non-proprietary and commonly used standards; and if there are multiple competing gateways to access this data, allowing anyone to switch providers within a minute or two, then there is no reason for those web2 interface service providers to be able to concentrate undue power.

So how could these two elements come together – the data format and the portability / switch-ability?

🤩 Web 2 Switch-ability

As mentioned above, the competition among b2b cloud infrastructure providers has continued to allow for immense value to accrue to their customers. Until now, these customers have been other businesses that use the cloud. Even so, the cloud providers have done such a good job providing better and better solutions for these customers, which are ever easier to deploy, that such solutions have become almost easy enough to be deployed for consumers as well. So not only can one easily envisage a world where multiple service providers compete by providing the best possible personal cloud services to consumers, but one does not even need to wait for that. Today, it takes just a few clicks to create a new server on Heroku or on glitch.com or a myriad of other services. Anyone can easily set up their own server within a few minutes. This bodes well for a leading edge of tech-savvy consumers to do exactly that!

But then what? What would you put on those servers? What data, and in what format? How can you make sure that such data is compatible across server types, and that such servers are interoperable (and switch-able), wherever they may sit?

💫 Web1 and the Personal Server Stack

A first step towards such interoperability is the CEPS initiative, which came out of the 2019 mydata.org conference and aimed to define a set of Common Endpoints for Personal Servers and datastores so that the same app can communicate with different types of personal servers using the same url endpoints. (i.e. the app only needs to know the base url of each user’s server to communicate with it, rather than create a new API for every server type.) With CEPS, any app developer can store a person’s app data on that person’s personal storage space, as long as the storage space has a CEPS-compatible interface. CEPS also starts to define how different CEPS-compatible servers can share data with each other, for example to send a message, or to give access to a piece of data, or to publish something and make it publicly accessible. This data – “users’ data”, sitting on their personal servers – is assumed to be stored in NoSQL data tables associated with each app. And whether the data is sitting in flat files or a cloud-based database, it can easily be downloaded by its owner and moved somewhere else without losing its cohesiveness. This ensures that ‘user-data’ is indeed easily portable and so the ‘user’ or ‘data-owner’ can easily switch services – i.e. that the service provider doesn’t have a lock-in on the data-owner.
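As a rough illustration of what this means for an app developer (the endpoint path and table name below are assumptions made for illustration, not quoted from the CEPS spec):

// Sketch: to a CEPS-based app, a user's personal server is just a base URL -
// switching servers means changing that URL, while the endpoints stay the same.
const APP_TABLE = 'com.example.notes.entries' // illustrative app table name

function cepsReadUrl (userServerUrl, recordId) {
  return userServerUrl + '/ceps/read/' + APP_TABLE + '/' + recordId
}

// the same app code works against any CEPS-compatible server
cepsReadUrl('https://alice-pds.example.com', 'abc123')
cepsReadUrl('https://bob-datastore.example.net', 'abc123')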

A second step would be to also store the apps themselves on the personal data space. Code is data after all, and so, having our apps be served from other persons’ servers seems incompatible with the aim of controlling our own data environments. It would leave too much room for app providers to gain the kind of power Moxie has warned us against. These apps, like one’s data, also need to be in a readable format and transportable across servers and mediums. Luckily, since the advent of web1, we have all been using such apps on a daily basis – these are the html, css and javascript text files that together make up each and every web page. Instead of having the app-providers host these files, these files can also be stored on each person’s personal storage space and served from there. Then each data-owner would have control over their data, as well as the app itself. The use of such an old standard not only ensures easy portability of the apps, but it also means that thousands of developers, even novices, would be able to build apps for this environment, or to convert their existing web-apps to work in that environment. It also implies that the server-layer itself plays a very small role, and has less of an opportunity to exert its dominance.

I started this essay by claiming that people “may very well want to have their own personal servers in the cloud, if these were easy enough to install and maintain, and if they had a rich ecosystem of apps running on them.” I have tried to depict an environment which may have a chance of meeting these criteria. If we start by converting our existing web-apps to this architecture, we may be able to use the web3 foundation of Swarm to forge a path towards the web1 ideals of controlling our web environment and data, all with the ease-of-use and the ease-of-development which we have gotten used to from web2.

🌹 Any Other Name

So then, the only problem remaining would be the name ‘Personal Server’… because Moxie may be right on that too: after all these years of false starts, it has become such a truism that no one would ever want a ‘personal server’, that the term itself may be too tainted now for anyone to want to run one.. so perhaps we should just rename ‘personal servers’ to “Serverless Application Platforms”.

____________________

Note: freezr is my own implementation of a personal server (ahem.. Serverless Application Platform), consistent with the architecture laid out above.

I will be giving a demo of freezr at the We are Millions hackathon on March 10th.



...
labels:
>>
Posted by salman on 1/1/2022

I modified NeDB for freezr so it can use async storage mediums, like AWS S3, or personal storage spaces like Dropbox. The code is on GitHub and npmjs.

Each new storage system can have a js file that emulates the 16 or so functions required to integrate that storage system into nedb-asyncfs. A number of examples (like dbfs_aws.js) are provided under the env folder on GitHub. Then, to initialize the db, you require the file and pass it in as follows:

// assuming nedb-asyncfs exports Datastore directly, like the original nedb package
const Datastore = require('nedb-asyncfs')
const CustomFS = require('../path/to/dbfs_EXAMPLE.js')

const db = new Datastore({ dbFileName, customFS: new CustomFS(fsParams) })

where dbFileName is the name of the db, and fsParams are the specific credentials that the storage system requires. For example, for AWS, fsParams could equal:

{
  accessKeyId: '11aws_access_key11',
  secretAccessKey: '22_secret22'
}

To make this work, I moved all the file system operations out of storage.js and persistence.js to dbfs_EXAMPLE.js (defaulting to dbfs_local.js which replicates the original nedb functionality), and made two main (interrelated) conceptual changes to the NeDB code:

  1. appendfile - This is a critical part of NeDB, but the function doesn't exist on cloud storage APIs, so the only way to 'append' a new record would be to download the whole db file, add the new record to it, and then re-write the whole thing to storage. Doing that on every db update is obviously hugely inefficient. So instead, I did something a little different: instead of appending a new record to the end of the db file (eg 'testdb.db'), for every new record I create a small file with that one record and write it to a folder (called '~testdb.db', following the NeDB naming convention of using ~). This makes the write operation acceptably fast, and I think it provides good redundancy. Afterwards, when a db file is crashsafe-written, all the small record-files in the folder are removed. Similarly, loading a database entails reading the main db file plus all the little files in the ~testdb.db folder, and then appending all the records to the main file in the order of the time they were written. (A rough sketch of this follows the list below.)

  2. doNotPersistOnLoad - it also turns out that persisting a database takes a long time, so it is quite annoying to persist every time you load the db, since it slows down the loading process considerably... So I added a doNotPersistOnLoad option. By default the behaviour is the same as NeDB's, but in practice you would only want to manage persisting the db at the application level... eg it makes more sense to have the application call 'persistence.compactDatafile()' when the server is less busy.
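Concretely, the approach in points 1 and 2 might look roughly like this - a minimal sketch only, assuming a customFS object with writeFile / readFile / listFiles style methods (the real nedb-asyncfs function names and details differ):

const path = require('path')

// Sketch: instead of appending to 'testdb.db', each new record is written as its own
// small file inside a companion folder '~testdb.db', named by write time so the
// records can be re-applied in order when the db is loaded.
async function appendRecord (recordString, dbFileName, customFS) {
  const recordFile = path.join('~' + dbFileName, Date.now() + '.db')
  await customFS.writeFile(recordFile, recordString) // hypothetical customFS method name
}

// Sketch: loading reads the main db file plus all pending record-files, oldest first.
async function loadRawDatabase (dbFileName, customFS) {
  let contents = await customFS.readFile(dbFileName)
  const pending = (await customFS.listFiles('~' + dbFileName)).sort()
  for (const f of pending) {
    contents += await customFS.readFile(path.join('~' + dbFileName, f))
  }
  return contents
}

// Sketch: with doNotPersistOnLoad, compaction is left to the application, e.g. run
// during quiet hours rather than on every load.
// const db = new Datastore({ dbFileName, customFS, doNotPersistOnLoad: true })
// setInterval(() => db.persistence.compactDatafile(), 24 * 60 * 60 * 1000)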

Of course, latency is an issue in general; for example, I had to add a bunch of setTimeouts to the tests for them to work, mostly because deleting files (especially multiple files) can take a bit of time, so reading the db right after deleting the 'record files' doesn't work. I also increased the timeout on the tests. Still, with a few exceptions below, all the tests passed for S3, Google Drive and Dropbox. Some notes on the testing:

  • 'testThrowInCallback' and 'testRightOrder' fail and I couldn't figure out what the issue is with them. They even fail when dbfs_local is used. I commented out those tests and noted 'TEST REMOVED'.
  • ‘TTL indexes can expire multiple documents and only what needs to be expired’ was also removed => TOO MANY TIMING ISSUES
  • I also removed (and marked) 3 tests in persistence.test.js as the tests didn't make sense for async I believe.
  • I also added a few fs tests to test different file systems.
  • To run tests with new file systems, you can add the dbfs_example.js file under the env folder, add a file called '.example_credentials.js' with the required credentials and finally adjust the params.js file to detect and use those credentials.
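For instance, a hypothetical .example_credentials.js for an S3-style store could be as simple as the sketch below (the exact shape is an assumption - the required fields are whatever params.js expects for that file system):

// .example_credentials.js - hypothetical contents, matching the fsParams example above
module.exports = {
  accessKeyId: '11aws_access_key11',
  secretAccessKey: '22_secret22'
}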

I made one other general change to the functionality: I don't think empty lines should be viewed as errors. In the regular NeDB, empty lines are considered errors but the corruptItems count starts at -1. I thought it was better not to count empty lines as errors, but to start the corruptItems count at 0. (See persistence.js.) So I added a line to persistence.js to ignore lines that are just '\n'.

Finally, nedb-asyncfs also updates the dependencies. underscore is updated to the latest version, as the version used by nedb had some vulnerabilities. I also moved binary-search-tree inside the nedb code base, which is admittedly ugly but works. (binary-search-tree was created by Louis Chatriot for nedb, and the alternative would have been to fork and publish that as well.)


...
labels:
>>
Posted by salman on 7/2/2020

vulog is a chrome extension that allows you to (1) bookmark web pages, highlight text on those pages, and take notes, (2) save your browsing history,  and (3) see the cookies tracking you on various web sites (and delete them). 

I wrote the first version of vulog 3 years ago to keep a log of all my web pages. It seemed to me that all the large tech companies were keeping track of my browsing history, and the only person who didn't have a full log was me! I wanted my browsing history sitting on my own personal server so that I can retain it for myself and do what I want with it.

At the time, I had also added some basic bookmarking functions on vulog, but I have been wanting to extend those features and make them much more useful:

  1. Keyboard only - Most extensions are accessed via a button next to the browser url bar. I wanted to make it faster and easier to add bookmarks and notes by using the keyboard alone. So now you can do that by pressing Ctrl-S, or Cmd-S on a Mac. (Who uses Ctrl-S to save copies of web pages these days anyway?) A rough sketch of how this is caught is shown after this list.
  2. Highlighting - I wanted to be able to highlight text and save those highlights. This can now be done by right clicking on highlighted text (thanks to Jérôme).
  3. Inbox - I wanted to have a special bookmark called 'inbox' and to add items to that inbox by right clicking on any link.
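Item 1 above essentially boils down to a content script listening for the key combination - roughly like this sketch (illustrative only, not the actual vulog source; showVulogMenu is a stand-in name):

// Sketch: catch Ctrl-S / Cmd-S in a content script and open the bookmarking menu.
function showVulogMenu () {
  console.log('vulog menu would be injected here')
}

document.addEventListener('keydown', (e) => {
  if ((e.ctrlKey || e.metaKey) && e.key.toLowerCase() === 's') {
    e.preventDefault() // stop the browser's own "save page" dialog
    showVulogMenu()
  }
})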

So these are all now implemented in the new vulog here:

https://chrome.google.com/webstore/detail/vulog-logger-bookmarker-h/peoooghegmfpgpafglhhibeeeeggmfhb

The code is all on github.

This post is supposed to be a live document with the following sections:

  1. Known Issues
  2. Instructions
  3. Privacy (CEPS)
  4. Future developments
  5. Acknowledgements

1. Known Issues

Here are some known problems and deficiencies with vulog :

  • Ctrl/Cmd-S doesn't work on all sites, especially those that make extensive use of javascript or which have menus with high z-indices. ;)
  • Highlighting - On some web pages, vulog can't find the text you have highlighted. It should work on most simple sites but not on interactive ones where content is always changing. But you can always see your highlights by pressing the extension button.
  • The notes and tags functionality has a bug in the current version, thanks to my clumsy fingers changing a function call name just before submitting it to the app store. But you can always take notes. This is fixed in the new version.


2. Instructions

Current tab

Click on the vulog button to see the main "Current" tab, and tag a page or bookmark it using these buttons:

- The 'bookmark' and 'star' buttons are for regular bookmarking.

- The 'Inbox' button is for items you want to read later. You can also right click on any web link on web pages you visit and add it to your vulog inbox right from the web page.

- Links marked with 'archive' do not show in default search results when you do a search from the Marks tab.  For example, once you have read a page from your inbox,  you might want to remove the 'inbox' mark, and add it to your 'archive'.

- The 'bullhorn' button makes the link public. Note that you need a CEPS-compatible server to store your data and to publish it, if you want to use this feature. (See below.)

Marks tab

In the Marks tab, you can search for items you have bookmarked.

Click on the bookmark icons to filter your results. (eg clicking on inbox turns the icon green and only shows items that have been marked 'inbox'. Clicking it again will turn the button red, and you will only see items that have NOT been marked 'inbox'.) You will notice that the 'archive' mark is red by default, so that archived items do not appear in the default search results.

History tab

Search your history. The general search box searches for words used in your tags, notes and highlights, as well as metadata associated with the page.

Right Clicking on web pages

On any web page, you can right click on text you have selected to highlight it, and you can right click on any link to add it to your inbox.

Ctrl/Cmd-S on web pages

When you are on any web page, you can press Ctrl-S (or Cmd-S on a Mac) and a small menu appears in the top right corner of the web page, allowing you to bookmark it. While the menu is open, pressing Ctrl/Cmd-I adds it to your inbox, Ctrl/Cmd-A archives it, Ctrl/Cmd-B adds a bookmark, and pressing Ctrl/Cmd-S again adds a star. You can remove marks by clicking on them with your mouse. The Escape key gets rid of the menu, which disappears automatically after a few seconds in any case.

Data storage

Your bookmarks and browser history are kept in Chrome's local storage, which has limited space. After some weeks (or months, depending on usage), vulog automatically deletes older items.

3. Privacy (CEPS)

vulog doesn't send any of your data to any outside servers, and you can always delete your data from the 'More' tab. If you want to store your data on your own server, you will need to set up a Personal Data Store. vulog was built to work with CEPS-compatible data stores. (See here for more details on CEPS - Common End Points for Personal Servers and data stores.)

Having your data sit on your personal data store also means that you can publish your bookmarks and highlights and notes. Press the bullhorn button to publish the link from your server. 

4. Future Developments

I expect to use vulog as an example app for the development of the CEPS sharing protocol.

5. Acknowledgements

Highlighting functionality was largely copied from Jérôme Parent-Lévesque. (See here.)

Rendering function (dgelements.js) was inspired by David Gilbertson (who never expected someone would be crazy enough to actually implement his idea I think.)



...
labels:
>>
Posted by salman on 3/15/2020

CEPS provides a way for applications to work with multiple data stores. For developers, this means that you can create a new app knowing that it can run on various compliant datastore systems. For Personal Data Store (PDS) system providers, it means that you can have that many more apps to offer to users of your data store. If CEPS is adopted widely, the personal data store ecosystem can only be enriched.

Today, a number of different personal data store systems are pursuing similar ends – to grant users full control over their personal data, effectively freeing them from the current web services model where third party web sites and applications are retaining all our personal data. Yet, today, each of these PDSs has its own proprietary technology and methods to allow third parties to build apps running on those data-stores. 

This is a paradox that can only slow down the adoption of PDSs:

  • As a user, why should I jump off the rock of the current proprietary web services model to land in another hard place where apps are still proprietary (even if I get more control over my data on those PDSs)? If I am assured that I have full portability to new data stores, I will have more confidence to join the ecosystem.
  • As a developer, why should I build a new app that runs solely on one type of data store? If my app could easily work with any one of multiple data stores, I would be much more inclined to build apps.

In this light, CEPS is the start of an effort to create some economies of scale in this nascent industry. 

In its current form, CEPS defines a minimum viable set of functions to run basic apps on PDSs. It allows the app to authenticate itself on the PDS, and then write records, read and query them, and update or delete the app’s own records.
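In code, that flow might look roughly like the sketch below (the endpoint paths, token handling and field names are assumptions made for illustration, not a verbatim rendering of the CEPS spec):

// Sketch of an app writing to and querying a CEPS-compatible personal data store.
const BASE_URL = 'https://my-pds.example.com'   // the user's own server (illustrative)
const APP_TABLE = 'com.example.notery.notes'    // the app's own table on that server

async function cepsRequest (token, endpointPath, body) {
  const resp = await fetch(BASE_URL + endpointPath, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: 'Bearer ' + token },
    body: JSON.stringify(body)
  })
  return resp.json()
}

// write a record, then query the app's own records back
async function demo (token) {
  const written = await cepsRequest(token, `/ceps/write/${APP_TABLE}`, { text: 'Hello CEPS', date: Date.now() })
  const notes = await cepsRequest(token, `/ceps/query/${APP_TABLE}`, { date: { $gt: 0 } })
  return { writtenId: written._id, count: notes.length }
}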

Here is how it works in practice. In the video, you see a desktop app – in this case a note-taking app called Notery, but it could also have been a mobile phone app. The app connects to my PDS, which is in the cloud, and uses it as its store of data. Any mobile app or desktop application that you can think of could use the same model. They don’t need to send your data to some server you have no control over – using CEPS, they can store your data on your own data store.

This second clip is similar. It is an app called Tallyzoo, with which you can record and count various things. It also connects to my server and keeps data there. This is significant for two main reasons.

First, Tallyzoo wasn’t written by me. It is easy to connect some app to some server if the same person is writing both. But in this case, the app was written by Christoph from OwnYourData without any knowledge of my server. The only thing that Christoph knew was that my server would accept CEPS commands. And that’s all he needed to allow me to use Tallyzoo and store my Tallyzoo data on MY personal data store.

Second, the Tallyzoo app is a server-based app – it is a web service. It is like all the great web sites we visit every day. It runs on a third party server and I am like any other user visiting a web site. The only difference is that Tallyzoo doesn’t keep my data on its own servers – it keeps the data on MY server. This is really significant in that it points to a model for all web sites to store our data on our data stores rather than on their servers.

This is a simple difference, and CEPS is a tiny, simple specification. Yet the example above points to a world wide web which could be radically different from the one we interact with today. It shows that indeed, there is no reason for any web site – any third party company – to keep any of our data on their servers.

This may be a world worth striving for.


...
labels: ceps freezr
com.salmanff.vulog

The developer has not defined a format for this record.

_id: "5e5c0dfd02d66f150a95833a"
vulog_mark_stars: ["fa-star","fa-bullhorn"]
vulog_mark_tags: []
vulog_mark_notes:
purl: https://medium.com/@apoikola/stop-speaking-about-personal-data-ownership-f487a2bd8475
url: https://medium.com/@apoikola/stop-speaking-about-personal-data-ownership-f487a2bd8475
domain_app: medium
type: article
title: Stop Speaking About Personal Data ‘Ownership’ - Antti Jogi Poikola - Medium
author: Antti Jogi Poikola
description: The convenience that many are looking from the ‘data ownership’ comes from the traditional economic understanding of property rights as residual rights. It means that we can contractually or otherwise
published: 2017-11-11T23:36:37.360Z
modified:
keywords: []
other: {"twitter:creator":"@apoikola","parsely-post-id":"f487a2bd8475"}
vulog_kword2: ["https","medium","com","apoikola","stop","speaking","about","personal","data","ownership","f487a2bd8475","article","2017","11","11t23","36","37","360z","‘ownership’","antti","jogi","poikola","the","convenience","that","many","are","looking","from","‘data","ownership’","comes","traditional","economic","understanding","of","property","rights","as","residual","it","means","we","can","contractually","or","otherwise"]
vulog_timestamp: 1566331097131
vulog_ttl_time: 175013
temp_unknown_tags: ["parsely-post-id"]
fj_modified_locally: 1566331115718
fj_deleted: false
_owner: salman
_app_table: com.salmanff.vulog.userMarks
__date_published: 8/20/2019
_original_id: 5d5c51179bad880015b3c2b9
_fields: null
Published by: salman on 8/20/2019
>>
Posted by salman on 6/23/2019
A description of the level playing field created by the personal server paradigm.
...
labels: