I have a LinkedIn profile that I rarely use. I regretted making an account early on because LinkedIn was notorious for sending unwanted e-mails: to stop them you had to opt out of dozens of different options sprinkled across their settings pages.
Once in a while I will log in, because I still get (by choice) e-mails when people want to connect or send me a message.
When I do log in, I am reminded of the other reason I try to stay away from LinkedIn - privacy. Whatever fears you have about Facebook or Google, I found LinkedIn to be orders of magnitude creepier.
The “People you may know” section in particular is pretty unsettling: some of the people it suggests I have only ever had e-mail contact with, and we have no other connections. Sometimes all we had was a brief exchange on an e-mail thread, and they are not actually in my address book.
LinkedIn does allow you to import contacts from e-mail providers such as GMail. I want to say it is possible, but extremely unlikely, that I somehow allowed LinkedIn access to my GMail account. LinkedIn does have my GMail address because that is what I used to sign up. But even if I were to entertain the idea that I imported my contacts from GMail, some of the people listed aren’t in my GMail address book.
The other possibility is that some people imported their contacts into LinkedIn and I happen to be in their address books. Based on our email exchanges, this seems unlikely, but possible.

Then there is LinkedIn’s privacy policy, which says:
We make other tools available to sync information with our Services, and may also develop additional features that allow Members to use their account in conjunction with other third-party services. For example, our mobile applications allow you to sync your device’s calendar, email and/or contacts apps with our Services to show you the LinkedIn profiles of meeting attendees, email correspondents and/or your contacts.
Another example are software tools that allow you to see our and other public information about the people you email or meet with and leverage our Services to help you gain insights from and grow your network. If you grant these products (mobile applications or our other Services that sync external email and calendar services, such as “LinkedIn Connected”) permission to access your email and calendar accounts, they will access and may store some of your email header and calendar history information. Our products that sync with external email services may also temporarily cache message content for performance reasons, in a way that is unreadable by us and our service providers.
E-mail headers of course include the To and From fields. And you just have to trust that they never read your e-mail contents ;)
Even though I personally did not use this tool to sync contacts, it’s pretty scary how many people did. I wonder if they knew what kind of access they were providing.
For the last six months or so I have been responsible for the build & deploy of a product at our company. Below are some of the things I have learned that I think are useful for any project.
Some key facts:
- We use Git for our source control
- We use NAnt to build, run tests, and package our software
- We use Jenkins and Puppet for continuous integration, deployment, and configuration
We have a Jenkins job that continuously polls Git for new commits. If new commits are found, it runs all the unit tests. If the tests pass, we tag (record the version number and Git revision hash) and package our binaries, and copy them to our Puppet environment. We then provision a VM, deploy and configure our software using Puppet, validate the deployment using the /status page (explained in a bit), and run the acceptance tests. If the acceptance tests pass, we run another deployment to our dev environment.
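At its core, the pipeline above is just a chain of fail-fast steps. A rough sketch in Python (the stage names are placeholders, not our actual scripts):

```python
# Illustrative sketch of the CI pipeline described above: each stage
# runs only if every previous stage succeeded; a failure stops the run.
def run_pipeline(stages):
    for name, stage in stages:
        if not stage():
            print(f"{name} failed; aborting pipeline")
            return False
    return True

stages = [
    ("unit tests", lambda: True),        # run all unit tests
    ("tag and package", lambda: True),   # tag with version + git hash, package binaries
    ("deploy to VM", lambda: True),      # provision a VM, deploy via Puppet
    ("/status check", lambda: True),     # validate the deployment
    ("acceptance tests", lambda: True),  # run acceptance tests
    ("deploy to dev", lambda: True),     # final deployment
]
```

Each lambda here stands in for a real script; the point is only that every stage gates the next.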
Jenkins Build Flow
It is a good idea to break the various parts of your CI into multiple jobs - for example, one to package your software and one to run acceptance tests. This allows you to run acceptance tests independently without having to re-package your software. You then just have to order your jobs.
Out of the box, Jenkins comes with the ability to trigger new builds as part of a job. I did this early on but found the Build Flow plugin to be a better tool. Here are my reasons for recommending it:
- Your entire process is defined in one job. Instead of job A calling job B calling job C, it’s job X calling A, then B, then C.
- I found it easier to reference build artifacts between jobs this way. Before, jobs B and C would each copy the “latest successful build” artifacts of job A (you can imagine this leading to a race condition where a new successful build of A completes between jobs B and C). Now jobs B and C receive the artifact reference via build parameters.
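The race condition is easier to see in a sketch. Suppose each downstream job resolves “latest successful build” on its own, versus the flow job pinning one build number up front and passing it down (everything here is an invented stand-in, not Jenkins API):

```python
# Illustration of the "latest successful build" race described above.
builds_of_a = [101]  # successful builds of job A, newest last

def latest():
    return builds_of_a[-1]

def job(build_number):
    return build_number  # pretend we fetch that build's artifact

# Fragile: jobs B and C each resolve "latest" themselves.
b = job(latest())
builds_of_a.append(102)  # a new build of A finishes mid-flow
c = job(latest())
assert b != c            # B and C used different artifacts!

# Safer: the flow job resolves once and passes it as a parameter.
pinned = latest()
assert job(pinned) == job(pinned)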
I only used the basic functionality, but I know other teams use more advanced features.
We actually did two iterations of our product. For the first iteration I took a more hands-off approach. One thing that would have made my life easier early on was a status page to tell us whether everything was working correctly. The development team had more pressing (and interesting) problems to deal with, and I didn’t push hard enough. This was a mistake. Because validating that the deployed code actually worked was a manual process, it wasn’t done. We learned much later - and at a much more critical time - that what we deployed wasn’t working at all. This was stressful.
This mistake was not repeated for the second iteration as I took a “more dev-y less op-y” role.
A status page should check all your key external dependencies (is the database up? Can you authenticate? Can you write logs?) and report back any errors. Even if your status page is nearly empty, it still brings value - can all your binaries be loaded? Did you configure IIS correctly? A status page is low-hanging fruit and brings in so much value.
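A sketch of what such a page boils down to - run every dependency check, collect per-check results, and report overall health (the checks here are placeholders, not our actual dependencies):

```python
# Illustrative /status handler: run each dependency check and report
# per-check results plus overall health.
def status(checks):
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = "ok"
        except Exception as exc:
            results[name] = f"error: {exc}"
    healthy = all(v == "ok" for v in results.values())
    return healthy, results

# Placeholder checks - real ones would hit the database, auth, etc.
checks = {
    "database": lambda: None,  # e.g. run SELECT 1
    "auth": lambda: None,      # e.g. validate a service credential
    "logging": lambda: None,   # e.g. write and read back a log line
}
```

The important property is that one failing check marks the whole page (and hence the deployment) unhealthy.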
In all likelihood you have a settings file. If you use a configuration management tool like Puppet (we use Puppet with Hiera), then you know you need to update the Hiera configurations before you deploy.
This is not always done.
It’s all too easy to add a new configuration setting to a project and forget about the deployment aspect. For us, adding a new setting required changes in four places: the Hiera configuration for the acceptance test, dev, and QA environments, plus a release note for another team to add the setting to prod.
We use JSON for our application’s settings file. On our /status page, we are adding validation (using JSON Schema) to make sure that all required settings are present and that no additional settings exist (for example, a configuration setting that we no longer need). The key here is that adding this check to the /status page will fail the deployment if we forget a configuration setting. This means we catch errors at deployment time, not at runtime.
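We use a proper JSON Schema for this, but the check amounts to something like the following stdlib-only sketch - required keys must be present, unknown keys are rejected (the setting names are invented for illustration):

```python
import json

# Illustrative settings validation: fail fast at deployment time if a
# required setting is missing or an obsolete/unknown one is present.
REQUIRED = {"database_url", "log_path", "auth_endpoint"}  # hypothetical names

def validate_settings(raw_json):
    settings = json.loads(raw_json)
    errors = []
    missing = REQUIRED - settings.keys()
    extra = settings.keys() - REQUIRED
    if missing:
        errors.append(f"missing settings: {sorted(missing)}")
    if extra:
        errors.append(f"unknown settings: {sorted(extra)}")
    return errors  # an empty list means the /status check passes
```

Wiring this into the /status page means a forgotten Hiera update fails the deployment instead of surfacing later as a runtime error.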
I’ll hang my head in shame and admit that this was one of the last things I tackled. So you finally deployed something, and the /status page says it is all fine and dandy. Awesome! Does it work? That’s a dev team problem ;)
Except it isn’t :’(
This step is difficult for a number of reasons: software is complicated, it has dependencies, and what should acceptance tests even cover? I decided to keep things simple: work with the dev team to map out the happy-path scenarios and find ways to verify that each one completes successfully. Eventually we may hit situations where this can’t be done (reliably, or at all), but we’ll cross that bridge when we get there.
When you are in that thin layer between the devs and prod/the various environments, issues come to you first. We often take for granted how easy it is to place a breakpoint, check the log, or get a nice stack trace in Visual Studio when things go badly. When things do go badly in the dev/QA environments and you had to do something silly to debug them, raise it as an issue. Catch and resolve these issues as quickly as possible before they hit production - where they may be much more difficult to debug because you don’t have access to the machine.
If you go somewhere where vegetarians are not common (e.g. rural Portugal), the common understanding is that vegetarians do not eat meat or fish. While this is true, it’s an oversimplification. It leads to misunderstandings such as chicken stock in soup being suitable for vegetarians because there is no meat in it (my grandma nearly did this, until I asked what kind of stock she uses).
It’s not too surprising, as people follow vegetarianism to varying degrees. Some people eat eggs and drink milk, some don’t eat eggs, some don’t drink milk but eat eggs. So they might say “I am vegetarian, but I don’t eat eggs”. And there are some people who say “I am vegetarian, but I eat fish” because they don’t know what a pescatarian is, which causes further confusion.
There is even one vegan who eats oysters, because reasons. Some people are strict, others are lax, and others make stuff up as they go along. Partly this is due to people’s motivations for becoming vegetarian; the other part, I think, is ignorance (see “no meat and fish” above).
So what is a vegetarian? When I started being more strict and doing more research, I found the UK Vegetarian Society to be a good source of information. They define a vegetarian as someone who:
“…lives on a diet of grains, pulses, nuts, seeds, vegetables and fruits with, or without, the use of dairy products and eggs. A vegetarian does not eat any meat, poultry, game, fish, shellfish* or by-products of slaughter.”
I think this is a pretty good definition. If explaining what a vegetarian is to someone, I would even just use the last sentence, despite its limitations (where do insects fit into this?).
Though I understand that even with a more consistent understanding of what it is, there will always be diversity in practice - and that’s okay too!